2021-11-19 21:16:58 +00:00
|
|
|
use crate::{
|
2023-03-02 08:21:21 +00:00
|
|
|
environment_map, prepass, EnvironmentMapLight, FogMeta, GlobalLightMeta, GpuFog, GpuLights,
|
Temporal Antialiasing (TAA) (#7291)
![image](https://user-images.githubusercontent.com/47158642/214374911-412f0986-3927-4f7a-9a6c-413bdee6b389.png)
# Objective
- Implement an alternative anti-aliasing technique
- TAA scales based on view resolution, not geometry complexity
- TAA filters textures, firefly pixels, and other aliasing not covered
by MSAA
- TAA additionally will reduce noise / increase quality in future
stochastic rendering techniques
- Closes https://github.com/bevyengine/bevy/issues/3663
## Solution
- Add a temporal jitter component
- Add a motion vector prepass
- Add a TemporalAntialias component and plugin
- Combine existing MSAA and FXAA examples and add TAA
## Followup Work
- Prepass motion vector support for skinned meshes
- Move uniforms needed for motion vectors into a separate bind group,
instead of using different bind group layouts
- Reuse previous frame's GPU view buffer for motion vectors, instead of
recomputing
- Mip biasing for sharper textures, and/or unjittering texture UVs
https://github.com/bevyengine/bevy/issues/7323
- Compute shader for better performance
- Investigate FSR techniques
- Historical depth-based disocclusion tests, for geometry disocclusion
- Historical luminance/hue-based tests, for shading disocclusion
- Pixel "locks" to reduce blending rate / revamp history confidence
mechanism
- Orthographic camera support for TemporalJitter
- Figure out COD's 1-tap bicubic filter
---
## Changelog
- Added MotionVectorPrepass and TemporalJitter
- Added TemporalAntialiasPlugin, TemporalAntialiasBundle, and
TemporalAntialiasSettings (see the usage sketch below)
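As a rough illustration of the changelog above, here is a minimal, hypothetical sketch of enabling TAA on a camera. The module path, whether these type names survived unchanged into a released Bevy version, and the surrounding app setup are assumptions, not part of this PR.
```rust
use bevy::prelude::*;
// Assumed module path; the type names are taken from the changelog above.
use bevy::core_pipeline::experimental::taa::{TemporalAntialiasBundle, TemporalAntialiasPlugin};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(TemporalAntialiasPlugin)
        // TAA accumulates samples across frames, so per-frame MSAA is typically disabled.
        .insert_resource(Msaa::Off)
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands) {
    // The bundle inserts the TAA settings component, TemporalJitter, and the
    // prepass components (including MotionVectorPrepass) on the camera entity.
    commands.spawn((Camera3dBundle::default(), TemporalAntialiasBundle::default()));
}
```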
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
2023-03-27 22:22:40 +00:00
|
|
|
GpuPointLights, LightMeta, NotShadowCaster, NotShadowReceiver, PreviousGlobalTransform,
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062
# Objective
- Reorder render schedule sets to allow data preparation when phase item
order is known to support improved batching
- Part of the batching/instancing etc plan from here:
https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one.
Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new
ordering
## Solution
- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and
sets in the render schedule.
- Put prepare_assets systems in the `PrepareAssets` set
- If explicit dependencies are needed on Mesh or Material RenderAssets
then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between
`ExtractCommands` and `Queue`
- Move `queue_mesh*_bind_group` to the `Prepare` set
- Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a
`PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroups` set
after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
- `prepare_lights` creates views and this must happen before `Queue`
- This system needs refactoring to stop handling all responsibilities
- Gather lights, sort, and create shadow map views. Store sorted light
entities in a resource
- Remove `BatchedPhaseItem`
- Replace `batch_range` with `batch_size` representing how many items to
skip after rendering the item or to skip the item entirely if
`batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase
items and `prepare_sprites` for batching after the `PhaseSort`
- `PhaseItem`s are still inserted in `queue_sprites`
- After sorting, adjacent compatible sprite phase items are accumulated
into `SpriteBatch` components on the first entity of each batch,
containing a range of vertex indices. The associated `PhaseItem`'s
`batch_size` is updated appropriately.
- `SpriteBatch` items are then drawn skipping over the other items in
the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`
---
## Changelog
Changed:
- Reordered and reworked render app schedule sets. The main change is
that data is extracted, queued, sorted, and then prepared when the order
of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the
reordering.
## Migration Guide
- Assets such as materials and meshes should now be created in
`PrepareAssets` e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`
e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in
`PrepareResources`, e.g. `prepare_prepass_textures`,
`prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups` e.g.
`prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare`, where the
order of the phase items is known, e.g. `prepare_sprites` (see the sketch below)
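As a sketch of where third-party render systems now go under this ordering, the following hypothetical plugin registers one system in `PrepareResources` and one in `PrepareBindGroups`; the plugin and system names are made up for illustration.
```rust
use bevy::prelude::*;
use bevy::render::{Render, RenderApp, RenderSet};

// Hypothetical systems, for illustration only.
fn prepare_my_buffers() { /* write vertex/uniform buffers here */ }
fn prepare_my_bind_group() { /* build bind groups from the prepared buffers */ }

pub struct MyRenderPlugin;

impl Plugin for MyRenderPlugin {
    fn build(&self, app: &mut App) {
        // Assumes the RenderApp sub-app has already been added by the render plugin.
        let render_app = app.sub_app_mut(RenderApp);
        render_app.add_systems(
            Render,
            (
                // Buffers and textures belong in PrepareResources...
                prepare_my_buffers.in_set(RenderSet::PrepareResources),
                // ...and bind group creation in PrepareBindGroups, which runs after it.
                prepare_my_bind_group.in_set(RenderSet::PrepareBindGroups),
            ),
        );
    }
}
```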
## Next Steps
- Introduce some generic mechanism to ensure items that can be batched
are grouped in the phase item order; currently you could easily have
`[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implement batching across the rest of Bevy
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2023-08-27 14:33:49 +00:00
|
|
|
ScreenSpaceAmbientOcclusionTextures, Shadow, ShadowSamplers, ViewClusterBindings,
|
|
|
|
ViewFogUniformOffset, ViewLightsUniformOffset, ViewShadowBindings,
|
|
|
|
CLUSTERED_FORWARD_STORAGE_BUFFER_COUNT, MAX_CASCADES_PER_LIGHT, MAX_DIRECTIONAL_LIGHTS,
|
2021-11-19 21:16:58 +00:00
|
|
|
};
|
2023-03-18 01:45:34 +00:00
|
|
|
use bevy_app::Plugin;
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine-grained
control. This is typically used for facial expressions. By specifying
multiple poses as vertex offsets, and providing a set of weights for each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allows composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in Bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. On the CPU side, this copies
fairly closely the way skinning is implemented, while on the GPU side
the shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The user-accessible `MorphWeights` component controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh may
have different weights); all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of fewer and more attributes (e.g. animated UVs, arbitrary
animated attributes)
- Dynamic pose allocation (so that, for example, zero-weighted poses
aren't uploaded to the GPU, enabling many more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`. This allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers); see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
the various poses defined as morph targets (see the usage sketch below).
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a single entity, _as per the glTF spec_.
- Add a `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously, this
was only a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
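As a rough usage sketch for the `MorphWeights` component described above (the module path and the `weights_mut` accessor are assumptions), a system could drive the first target's weight over time:
```rust
use bevy::prelude::*;
// Assumed export path for the MorphWeights component added by this PR.
use bevy::render::mesh::morph::MorphWeights;

// Hypothetical system: oscillate the weight of the first morph target.
fn animate_first_morph_target(time: Res<Time>, mut morphs: Query<&mut MorphWeights>) {
    let t = (time.elapsed_seconds().sin() + 1.0) * 0.5;
    for mut weights in &mut morphs {
        // Assumes a `weights_mut` accessor returning the mutable weight slice.
        if let Some(first) = weights.weights_mut().first_mut() {
            *first = t;
        }
    }
}
```
Such a system would be registered in the main app, e.g. with `add_systems(Update, animate_first_morph_target)`.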
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
use bevy_asset::{load_internal_asset, Assets, Handle, HandleId, HandleUntyped};
|
2023-02-19 20:38:13 +00:00
|
|
|
use bevy_core_pipeline::{
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
2023-08-27 14:33:49 +00:00
|
|
|
core_3d::{AlphaMask3d, Opaque3d, Transparent3d},
|
2023-02-19 20:38:13 +00:00
|
|
|
prepass::ViewPrepassTextures,
|
|
|
|
tonemapping::{
|
|
|
|
get_lut_bind_group_layout_entries, get_lut_bindings, Tonemapping, TonemappingLuts,
|
|
|
|
},
|
|
|
|
};
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"

[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
    return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
    color: vec4<f32>;
};

[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group

[[block]]
struct StandardMaterial {
    base_color: vec4<f32>;
    emissive: vec4<f32>;
    perceptual_roughness: f32;
    metallic: f32;
    reflectance: f32;
    flags: u32;
};

/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
    fn build(&self, app: &mut bevy_app::App) {
        let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
        shaders.set_untracked(
            MESH_BIND_GROUP_HANDLE,
            Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
                .with_import_path("bevy_pbr::mesh_bind_group"),
        );
        shaders.set_untracked(
            MESH_VIEW_BIND_GROUP_HANDLE,
            Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
                .with_import_path("bevy_pbr::mesh_view_bind_group"),
        );
    }
}
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines and shaders not being ready yet. We can abort a render command early in these cases, preventing Bevy from trying to set bind groups or issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
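As a sketch of what a fallible render command looks like, the following pipeline-setting command aborts early when the specialized pipeline has not finished compiling. The trait signature shown follows a later Bevy API than this 2021 PR (matching the imports used by this file), and the struct name is illustrative.
```rust
use bevy::ecs::query::ROQueryItem;
use bevy::ecs::system::{lifetimeless::SRes, SystemParamItem};
use bevy::render::render_phase::{
    CachedRenderPipelinePhaseItem, RenderCommand, RenderCommandResult, TrackedRenderPass,
};
use bevy::render::render_resource::PipelineCache;

// Fallible draw command: bail out instead of binding/drawing with a pipeline
// whose shaders are still being compiled.
pub struct SetPhaseItemPipeline;

impl<P: CachedRenderPipelinePhaseItem> RenderCommand<P> for SetPhaseItemPipeline {
    type Param = SRes<PipelineCache>;
    type ViewWorldQuery = ();
    type ItemWorldQuery = ();

    fn render<'w>(
        item: &P,
        _view: ROQueryItem<'w, ()>,
        _entity: ROQueryItem<'w, ()>,
        pipeline_cache: SystemParamItem<'w, '_, Self::Param>,
        pass: &mut TrackedRenderPass<'w>,
    ) -> RenderCommandResult {
        match pipeline_cache
            .into_inner()
            .get_render_pipeline(item.cached_pipeline())
        {
            Some(pipeline) => {
                pass.set_render_pipeline(pipeline);
                RenderCommandResult::Success
            }
            // Pipeline not ready yet: abort this command cleanly.
            None => RenderCommandResult::Failure,
        }
    }
}
```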
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
use bevy_ecs::{
|
|
|
|
prelude::*,
|
2023-01-04 01:13:30 +00:00
|
|
|
query::ROQueryItem,
|
2022-06-11 09:13:37 +00:00
|
|
|
system::{lifetimeless::*, SystemParamItem, SystemState},
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
};
|
Reduce the size of MeshUniform to improve performance (#9416)
# Objective
- Significantly reduce the size of MeshUniform by only including
necessary data.
## Solution
Local-to-world model transforms are affine. This means they only need a
4x3 matrix to represent them.
`MeshUniform` stores the current and previous model transforms, and the
inverse transpose of the current model transform, all as 4x4 matrices.
Instead we can store the current and previous model transforms as 4x3
matrices, and we only need the upper-left 3x3 part of the inverse
transpose of the current model transform. This change reduces the
serialized `MeshUniform` size from 208 bytes to 144 bytes, over a 30%
saving in the data to serialize, and in VRAM bandwidth and space.
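As a small illustrative helper (not part of this PR), the packing idea looks like this in Rust: transposing the affine model matrix yields three rows of four floats, and the implicit fourth row (0, 0, 0, 1) can be reconstructed in the shader.
```rust
use bevy::math::{Affine3A, Mat4, Vec4};

// Hypothetical helper: an affine model matrix fits in three Vec4 rows.
fn pack_affine_rows(model: &Affine3A) -> [Vec4; 3] {
    // Transposing turns the four columns (three axes + translation) into rows,
    // so the first three rows carry the full 4x3 transform.
    let transposed = Mat4::from(*model).transpose();
    [transposed.x_axis, transposed.y_axis, transposed.z_axis]
}
```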
## Benchmarks
On an M1 Max, running `many_cubes -- sphere`, main is in yellow, this PR
is in red:
<img width="1484" alt="Screenshot 2023-08-11 at 02 36 43"
src="https://github.com/bevyengine/bevy/assets/302146/7d99c7b3-f2bb-4004-a8d0-4c00f755cb0d">
A reduction in frame time of ~14%.
---
## Changelog
- Changed: Redefined `MeshUniform` to improve performance by using 4x3
affine transforms and reconstructing 4x4 matrices in the shader. Helper
functions were added to `bevy_pbr::mesh_functions` to unpack the data:
`affine_to_square` converts the 4x3 transform, packed as 3x4 matrix data,
into a 4x4 matrix, and `mat2x4_f32_to_mat3x3` converts the 3x3 matrix,
packed as mat2x4-plus-f32 data, back into a 3x3.
## Migration Guide
Shader code before:
```wgsl
var model = mesh[instance_index].model;
```
Shader code after:
```wgsl
#import bevy_pbr::mesh_functions affine_to_square
var model = affine_to_square(mesh[instance_index].model);
```
2023-08-15 06:00:23 +00:00
|
|
|
use bevy_math::{Affine3, Affine3A, Mat4, Vec2, Vec3Swizzles, Vec4};
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
use bevy_reflect::TypeUuid;
|
2021-12-14 03:58:23 +00:00
|
|
|
use bevy_render::{
|
2022-09-28 04:20:27 +00:00
|
|
|
globals::{GlobalsBuffer, GlobalsUniform},
|
2022-03-29 18:31:13 +00:00
|
|
|
mesh::{
|
|
|
|
skinning::{SkinnedMesh, SkinnedMeshInverseBindposes},
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
GpuBufferInfo, InnerMeshVertexBufferLayout, Mesh, MeshVertexBufferLayout,
|
|
|
|
VertexAttributeDescriptor,
|
2022-03-29 18:31:13 +00:00
|
|
|
},
|
2023-01-19 22:11:13 +00:00
|
|
|
prelude::Msaa,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
render_asset::RenderAssets,
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
2023-08-27 14:33:49 +00:00
|
|
|
render_phase::{PhaseItem, RenderCommand, RenderCommandResult, RenderPhase, TrackedRenderPass},
|
Migrate to encase from crevice (#4339)
# Objective
- Unify buffer APIs
- Also see #4272
## Solution
- Replace vendored `crevice` with `encase`
---
## Changelog
- Changed `StorageBuffer`
- Added `DynamicStorageBuffer`
- Replaced `UniformVec` with `UniformBuffer`
- Replaced `DynamicUniformVec` with `DynamicUniformBuffer`
## Migration Guide
### `StorageBuffer`
- Removed `set_body()`, `values()`, `values_mut()`, `clear()`, `push()`, `append()`
- Added `set()`, `get()`, `get_mut()`
### `UniformVec` -> `UniformBuffer`
- Renamed `uniform_buffer()` to `buffer()`
- Removed `len()`, `is_empty()`, `capacity()`, `push()`, `reserve()`, `clear()`, `values()`
- Added `set()`, `get()`
### `DynamicUniformVec` -> `DynamicUniformBuffer`
- Renamed `uniform_buffer()` to `buffer()`
- Removed `capacity()`, `reserve()`
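A minimal sketch of the replacement API named above (the function and the scalar value type are only for illustration; real code would run in the render world with access to the render device and queue):
```rust
use bevy::render::render_resource::UniformBuffer;
use bevy::render::renderer::{RenderDevice, RenderQueue};

// The encase-based flow: set() the CPU-side value, then write it to the GPU.
fn write_scalar_uniform(device: &RenderDevice, queue: &RenderQueue, value: f32) {
    let mut uniform = UniformBuffer::<f32>::default();
    uniform.set(value);
    // Allocates (or reuses) the underlying GPU buffer and queues the write.
    uniform.write_buffer(device, queue);
}
```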
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-05-18 21:09:21 +00:00
|
|
|
render_resource::*,
|
2022-10-26 20:13:59 +00:00
|
|
|
renderer::{RenderDevice, RenderQueue},
|
|
|
|
texture::{
|
EnvironmentMapLight, BRDF Improvements (#7051)
(Before)
![image](https://user-images.githubusercontent.com/47158642/213946111-15ec758f-1f1d-443c-b196-1fdcd4ae49da.png)
(After)
![image](https://user-images.githubusercontent.com/47158642/217051179-67381e73-dd44-461b-a2c7-87b0440ef8de.png)
![image](https://user-images.githubusercontent.com/47158642/212492404-524e4ad3-7837-4ed4-8b20-2abc276aa8e8.png)
# Objective
- Improve lighting; especially reflections.
- Closes https://github.com/bevyengine/bevy/issues/4581.
## Solution
- Implement environment maps, providing better ambient light.
- Add microfacet multibounce approximation for specular highlights from Filament.
- Occlusion is no longer incorrectly applied to direct lighting. It now only applies to diffuse indirect light. It is unclear whether it's also supposed to apply to specular indirect light - the glTF specification just says "indirect light". In the case of ambient occlusion, for instance, that's usually only calculated as diffuse. For now, I'm choosing to apply this just to indirect diffuse light, and not specular.
- Modified the PBR example to use an environment map, and have labels.
- Added `FallbackImageCubemap`.
## Implementation
- IBL technique references can be found in environment_map.wgsl.
- It's more accurate to use a LUT for the scale/bias. Filament has a good reference on generating this LUT. For now, I just used an analytic approximation.
- For now, environment maps must first be prefiltered outside of bevy using a 3rd party tool. See the `EnvironmentMap` documentation.
- Eventually, we should have our own prefiltering code, so that we can have dynamically changing environment maps, as well as let users drop in an HDR image and use asset preprocessing to create the needed textures using only bevy.
---
## Changelog
- Added an `EnvironmentMapLight` camera component that adds additional ambient light to a scene (see the usage sketch below).
- StandardMaterials will now appear brighter and more saturated at high roughness, due to internal material changes. This is more physically correct.
- Fixed StandardMaterial occlusion being incorrectly applied to direct lighting.
- Added `FallbackImageCubemap`.
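A hypothetical usage sketch for the `EnvironmentMapLight` component (the import path and asset paths are assumptions; the cubemaps must already be prefiltered as described above):
```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

// Attach the environment map to the camera that should receive the ambient light.
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
        },
    ));
}
```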
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2023-02-09 16:46:32 +00:00
|
|
|
BevyDefault, DefaultImageSampler, FallbackImageCubemap, FallbackImagesDepth,
|
|
|
|
FallbackImagesMsaa, GpuImage, Image, ImageSampler, TextureFormatPixelInfo,
|
2022-10-26 20:13:59 +00:00
|
|
|
},
|
|
|
|
view::{ComputedVisibility, ViewTarget, ViewUniform, ViewUniformOffset, ViewUniforms},
|
2023-03-18 01:45:34 +00:00
|
|
|
Extract, ExtractSchedule, Render, RenderApp, RenderSet,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
};
|
|
|
|
use bevy_transform::components::GlobalTransform;
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
use bevy_utils::{tracing::error, HashMap, Hashed};
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
2023-08-27 14:33:49 +00:00
|
|
|
use fixedbitset::FixedBitSet;
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [gltf standard][2] and are a feature of
Unity and Unreal, and babylone.js, it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copy of the same mesh may
have different weights), all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old babylone.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylone.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`; this allows defining morph targets (a fairly complex,
nested data structure) through iterators (i.e. a single copy instead of
passing buffers around), see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
the various poses defined as morph targets.
- `MorphWeights` are directly inherited by the direct children (a single
level of hierarchy) of an entity. This allows controlling several mesh
primitives through a single entity, _as per the glTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph target weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously this
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are
now accessed through its `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
|
|
|
|
use crate::render::{
|
|
|
|
morph::{extract_morphs, prepare_morphs, MorphIndex, MorphUniform},
|
|
|
|
MeshLayouts,
|
|
|
|
};
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to create bind groups or issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
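Conceptually, the early-abort looks something like this (a plain-Rust sketch of the idea only, not the actual `RenderCommand` trait; all names are illustrative):
```rust
/// Outcome of a single render command.
enum RenderCommandResult {
    Success,
    Failure,
}

/// Stand-in for the pipeline cache: `None` means "still compiling".
struct PipelineCache {
    compiled: Vec<Option<String>>,
}

impl PipelineCache {
    fn get(&self, id: usize) -> Option<&String> {
        self.compiled.get(id).and_then(|p| p.as_ref())
    }
}

/// A draw step bails out early instead of binding a pipeline that has
/// not finished compiling yet; the caller simply skips the item.
fn set_item_pipeline(cache: &PipelineCache, pipeline_id: usize) -> RenderCommandResult {
    match cache.get(pipeline_id) {
        Some(_pipeline) => RenderCommandResult::Success,
        None => RenderCommandResult::Failure,
    }
}

fn main() {
    let cache = PipelineCache {
        compiled: vec![Some("opaque_pbr".to_string()), None],
    };
    for id in 0..2 {
        match set_item_pipeline(&cache, id) {
            RenderCommandResult::Success => println!("draw item with pipeline {id}"),
            RenderCommandResult::Failure => println!("skip item, pipeline {id} not ready"),
        }
    }
}
```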
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
|
|
|
|
#[derive(Default)]
|
|
|
|
pub struct MeshRenderPlugin;
|
|
|
|
|
2023-08-28 16:58:45 +00:00
|
|
|
/// Maximum number of joints supported for skinned meshes.
|
|
|
|
pub const MAX_JOINTS: usize = 256;
|
2022-03-29 18:31:13 +00:00
|
|
|
const JOINT_SIZE: usize = std::mem::size_of::<Mat4>();
|
|
|
|
pub(crate) const JOINT_BUFFER_SIZE: usize = MAX_JOINTS * JOINT_SIZE;
|
|
|
|
|
2022-07-14 21:17:16 +00:00
|
|
|
pub const MESH_VERTEX_OUTPUT: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 2645551199423808407);
|
Split mesh shader files (#4867)
# Objective
- Split PBR and 2D mesh shaders into types and bindings to prepare the shaders to be more reusable.
- See #3969 for details. I'm doing this in multiple steps to make review easier.
---
## Changelog
- Changed: 2D and PBR mesh shaders are now split into types and bindings; the following shader imports are available: `bevy_pbr::mesh_view_types`, `bevy_pbr::mesh_view_bindings`, `bevy_pbr::mesh_types`, `bevy_pbr::mesh_bindings`, `bevy_sprite::mesh2d_view_types`, `bevy_sprite::mesh2d_view_bindings`, `bevy_sprite::mesh2d_types`, `bevy_sprite::mesh2d_bindings`
## Migration Guide
- In shaders for 3D meshes:
- `#import bevy_pbr::mesh_view_bind_group` -> `#import bevy_pbr::mesh_view_bindings`
- `#import bevy_pbr::mesh_struct` -> `#import bevy_pbr::mesh_types`
- NOTE: If you are using the mesh bind group at bind group index 2, you can remove those binding statements in your shader and just use `#import bevy_pbr::mesh_bindings` which itself imports the mesh types needed for the bindings.
- In shaders for 2D meshes:
- `#import bevy_sprite::mesh2d_view_bind_group` -> `#import bevy_sprite::mesh2d_view_bindings`
- `#import bevy_sprite::mesh2d_struct` -> `#import bevy_sprite::mesh2d_types`
- NOTE: If you are using the mesh2d bind group at bind group index 2, you can remove those binding statements in your shader and just use `#import bevy_sprite::mesh2d_bindings` which itself imports the mesh2d types needed for the bindings.
2022-05-31 23:23:25 +00:00
|
|
|
pub const MESH_VIEW_TYPES_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 8140454348013264787);
|
|
|
|
pub const MESH_VIEW_BINDINGS_HANDLE: HandleUntyped =
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 9076678235888822571);
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
pub const MESH_TYPES_HANDLE: HandleUntyped =
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 2506024101911992377);
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
pub const MESH_BINDINGS_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 16831548636314682308);
|
2022-06-14 00:32:33 +00:00
|
|
|
pub const MESH_FUNCTIONS_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 6300874327833745635);
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
pub const MESH_SHADER_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 3252377289100772450);
|
2022-03-29 18:31:13 +00:00
|
|
|
pub const SKINNING_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 13215291596265391738);
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
pub const MORPH_HANDLE: HandleUntyped =
|
|
|
|
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 970982813587607345);
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
|
|
|
|
impl Plugin for MeshRenderPlugin {
|
|
|
|
fn build(&self, app: &mut bevy_app::App) {
|
2022-07-14 21:17:16 +00:00
|
|
|
load_internal_asset!(
|
|
|
|
app,
|
|
|
|
MESH_VERTEX_OUTPUT,
|
|
|
|
"mesh_vertex_output.wgsl",
|
|
|
|
Shader::from_wgsl
|
|
|
|
);
|
2022-02-18 22:56:57 +00:00
|
|
|
load_internal_asset!(
|
|
|
|
app,
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
MESH_VIEW_TYPES_HANDLE,
|
|
|
|
"mesh_view_types.wgsl",
|
improve shader import model (#5703)
# Objective
Operate on naga IR directly to improve handling of shader modules:
- give codespan reporting into imported modules
- allow glsl to be used from wgsl and vice-versa
The ultimate objective is to make it possible to:
- provide user hooks for core shader functions (to modify light
behaviour within the standard pbr pipeline, for example)
- make automatic binding slot allocation possible
But since this is already big, adds some value, and (I think) is at
feature parity with the existing code, I wanted to push this now.
## Solution
I made a crate called naga_oil (https://github.com/robtfm/naga_oil -
unpublished for now, could be part of bevy) which manages modules by:
- building each module independently to naga IR
- creating "header" files for each supported language, which are used to
build dependent modules/shaders
- making final shaders by combining the shader IR with the IR for imported
modules
I then integrated this into bevy, replacing some of the existing shader
processing code, and reworked the examples to reflect this.
## Migration Guide
Shaders that don't use `#import` directives should work without changes.
The most notable user-facing difference is that imported
functions/variables/etc. need to be qualified at point of use, and
there's no "leakage" of visible items into your shader scope from the
imports of your imports, so if you used things imported by your imports,
you now need to import them directly and qualify them.
The current strategy of including/'spreading' `mesh_vertex_output`
directly into a struct doesn't work any more, so these need to be
modified as per the examples (e.g. color_material.wgsl, or many others).
Mesh data is assumed to be in bind group 2 by default; if mesh data is
bound into bind group 1 instead, then the `MESH_BINDGROUP_1` shader def
needs to be added to the pipeline shader_defs, as in the sketch below.
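A minimal sketch of that last point (assuming you specialize a pipeline yourself; `enable_mesh_bindgroup_1` is a hypothetical helper name, not an API from this PR):
```rust
use bevy::render::render_resource::RenderPipelineDescriptor;

// Push the MESH_BINDGROUP_1 shader def onto every stage of a pipeline
// descriptor, e.g. from inside a `specialize` implementation, so the
// mesh bindings resolve to bind group 1 instead of the default 2.
fn enable_mesh_bindgroup_1(descriptor: &mut RenderPipelineDescriptor) {
    descriptor.vertex.shader_defs.push("MESH_BINDGROUP_1".into());
    if let Some(fragment) = descriptor.fragment.as_mut() {
        fragment.shader_defs.push("MESH_BINDGROUP_1".into());
    }
}
```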
2023-06-27 00:29:22 +00:00
|
|
|
Shader::from_wgsl_with_defs,
|
|
|
|
vec![
|
|
|
|
ShaderDefVal::UInt(
|
|
|
|
"MAX_DIRECTIONAL_LIGHTS".into(),
|
|
|
|
MAX_DIRECTIONAL_LIGHTS as u32
|
|
|
|
),
|
|
|
|
ShaderDefVal::UInt(
|
|
|
|
"MAX_CASCADES_PER_LIGHT".into(),
|
|
|
|
MAX_CASCADES_PER_LIGHT as u32,
|
|
|
|
)
|
|
|
|
]
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
);
|
|
|
|
load_internal_asset!(
|
|
|
|
app,
|
|
|
|
MESH_VIEW_BINDINGS_HANDLE,
|
|
|
|
"mesh_view_bindings.wgsl",
|
2022-02-18 22:56:57 +00:00
|
|
|
Shader::from_wgsl
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
);
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
load_internal_asset!(app, MESH_TYPES_HANDLE, "mesh_types.wgsl", Shader::from_wgsl);
|
2022-06-14 00:32:33 +00:00
|
|
|
load_internal_asset!(
|
|
|
|
app,
|
|
|
|
MESH_FUNCTIONS_HANDLE,
|
|
|
|
"mesh_functions.wgsl",
|
|
|
|
Shader::from_wgsl
|
|
|
|
);
|
Split mesh shader files (#4867)
2022-05-31 23:23:25 +00:00
|
|
|
load_internal_asset!(app, MESH_SHADER_HANDLE, "mesh.wgsl", Shader::from_wgsl);
|
2022-03-29 18:31:13 +00:00
|
|
|
load_internal_asset!(app, SKINNING_HANDLE, "skinning.wgsl", Shader::from_wgsl);
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
load_internal_asset!(app, MORPH_HANDLE, "morph.wgsl", Shader::from_wgsl);
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
|
2022-01-08 10:39:43 +00:00
|
|
|
if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
|
|
|
|
render_app
|
2022-03-29 18:31:13 +00:00
|
|
|
.init_resource::<SkinnedMeshUniform>()
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
.init_resource::<MeshBindGroups>()
|
|
|
|
.init_resource::<MorphUniform>()
|
|
|
|
.add_systems(
|
|
|
|
ExtractSchedule,
|
|
|
|
(extract_meshes, extract_skinned_meshes, extract_morphs),
|
|
|
|
)
|
2023-03-18 01:45:34 +00:00
|
|
|
.add_systems(
|
|
|
|
Render,
|
|
|
|
(
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062
# Objective
- Reorder render schedule sets to allow data preparation when phase item
order is known to support improved batching
- Part of the batching/instancing etc plan from here:
https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one.
Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new
ordering
## Solution
- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and
sets in the render schedule.
- Put prepare_assets systems in the `PrepareAssets` set
- If explicit dependencies are needed on Mesh or Material RenderAssets
then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between
`ExtractCommands` and Queue
- Move `queue_mesh*_bind_group` to the Prepare stage
- Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a
`PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroup` set
after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
- `prepare_lights` creates views and this must happen before `Queue`
- This system needs refactoring to stop handling all responsibilities
- Gather lights, sort, and create shadow map views. Store sorted light
entities in a resource
- Remove `BatchedPhaseItem`
- Replace `batch_range` with `batch_size` representing how many items to
skip after rendering the item or to skip the item entirely if
`batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase
items and `prepare_sprites` for batching after the `PhaseSort`
- `PhaseItem`s are still inserted in `queue_sprites`
- After sorting adjacent compatible sprite phase items are accumulated
into `SpriteBatch` components on the first entity of each batch,
containing a range of vertex indices. The associated `PhaseItem`'s
`batch_size` is updated appropriately.
- `SpriteBatch` items are then drawn skipping over the other items in
the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`
---
## Changelog
Changed:
- Reordered and reworked render app schedule sets. The main change is
that data is extracted, queued, sorted, and then prepared when the order
of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the
reordering.
## Migration Guide
- Assets such as materials and meshes should now be created in
`PrepareAssets` e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`
e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in
`PrepareResources`, e.g. `prepare_prepass_textures`,
`prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups` e.g.
`prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare` where the
order of the phase items is known e.g. `prepare_sprites`
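A minimal sketch of registering render-app systems under the reordered sets; only the set names come from this guide, the plugin and system names are placeholders:
```rust
use bevy::prelude::*;
use bevy::render::{Render, RenderApp, RenderSet};

// Placeholder systems standing in for a third-party plugin's render systems.
fn prepare_my_instance_buffer() {}
fn prepare_my_bind_group() {}
fn batch_my_phase_items() {}

fn register_render_systems(app: &mut App) {
    let render_app = match app.get_sub_app_mut(RenderApp) {
        Ok(render_app) => render_app,
        Err(_) => return,
    };
    render_app.add_systems(
        Render,
        (
            // Buffers and textures now belong in PrepareResources...
            prepare_my_instance_buffer.in_set(RenderSet::PrepareResources),
            // ...bind groups in PrepareBindGroups...
            prepare_my_bind_group.in_set(RenderSet::PrepareBindGroups),
            // ...and batching in Prepare, where the sorted phase order is known.
            batch_my_phase_items.in_set(RenderSet::Prepare),
        ),
    );
}
```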
## Next Steps
- Introduce some generic mechanism to ensure items that can be batched
are grouped in the phase item order; currently you could easily have
`[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implementing batching across the rest of bevy
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2023-08-27 14:33:49 +00:00
|
|
|
prepare_mesh_uniforms.in_set(RenderSet::PrepareResources),
|
|
|
|
prepare_skinned_meshes.in_set(RenderSet::PrepareResources),
|
|
|
|
prepare_morphs.in_set(RenderSet::PrepareResources),
|
|
|
|
prepare_mesh_bind_group.in_set(RenderSet::PrepareBindGroups),
|
|
|
|
prepare_mesh_view_bind_groups.in_set(RenderSet::PrepareBindGroups),
|
2023-03-18 01:45:34 +00:00
|
|
|
),
|
|
|
|
);
|
2022-01-08 10:39:43 +00:00
|
|
|
}
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to bind group / do draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
}
|
Webgpu support (#8336)
# Objective
- Support WebGPU
- alternative to #5027 that doesn't need any async / await
- fixes #8315
- Surprise fix #7318
## Solution
### For async renderer initialisation
- Update the plugin lifecycle:
- app builds the plugin
- calls `plugin.build`
- registers the plugin
- app starts the event loop
- event loop waits for `ready` of all registered plugins in the same
order
- returns `true` by default
- then call all `finish` then all `cleanup` in the same order as
registered
- then execute the schedule
In the case of the renderer, to avoid anything async:
- building the renderer plugin creates a detached task that will send
back the initialised renderer through a mutex in a resource
- `ready` will wait for the renderer to be present in the resource
- `finish` will take that renderer and place it in the expected
resources by other plugins
- the `finish` methods of other plugins (that expect the renderer to be
available) are then called, and they are able to set up their pipelines
- `cleanup` is called; the only custom implementation remaining is the one
for pipeline rendering
### For WebGPU support
- update the `build-wasm-example` script to support passing `--api
webgpu` that will build the example with WebGPU support
- the webgl2 feature used to always be enabled when building for wasm. It is
now in the default feature list and enabled on all platforms, so checks for
this feature must also verify that the target_arch is `wasm32`
---
## Migration Guide
- `Plugin::setup` has been renamed `Plugin::cleanup`
- `Plugin::finish` has been added, and plugins adding pipelines should
do it in this function instead of `Plugin::build`
```rust
// Before
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>()
.init_resource::<OtherRenderResource>();
}
}
// After
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<OtherRenderResource>();
}
fn finish(&self, app: &mut App) {
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>();
}
}
```
2023-05-04 22:07:57 +00:00
|
|
|
|
|
|
|
fn finish(&self, app: &mut bevy_app::App) {
|
Use GpuArrayBuffer for MeshUniform (#9254)
# Objective
- Reduce the number of rebindings to enable batching of draw commands
## Solution
- Use the new `GpuArrayBuffer` for `MeshUniform` data to store all
`MeshUniform` data in arrays within fewer bindings
- Sort opaque/alpha mask prepass, opaque/alpha mask main, and shadow
phases also by the batch per-object data binding dynamic offset to
improve performance on WebGL2.
---
## Changelog
- Changed: Per-object `MeshUniform` data is now managed by
`GpuArrayBuffer` as arrays in buffers that need to be indexed into.
## Migration Guide
Accessing the `model` member of an individual mesh object's shader
`Mesh` struct the old way where each `MeshUniform` was stored at its own
dynamic offset:
```rust
struct Vertex {
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh.model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
The new way where one needs to index into the array of `Mesh`es for the
batch:
```rust
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh[vertex.instance_index].model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
Note that using the instance_index is the default way to pass the
per-object index into the shader, but if you wish to do custom rendering
approaches you can pass it in however you like.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
2023-07-30 13:17:08 +00:00
|
|
|
let mut mesh_bindings_shader_defs = Vec::with_capacity(1);
|
|
|
|
|
2023-05-04 22:07:57 +00:00
|
|
|
if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
|
2023-07-30 13:17:08 +00:00
|
|
|
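// Assumption: `batch_size` is `Some` only when storage buffers are unavailable and
// `MeshUniform`s fall back to a batched uniform buffer (e.g. on WebGL2).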
if let Some(per_object_buffer_batch_size) = GpuArrayBuffer::<MeshUniform>::batch_size(
|
|
|
|
render_app.world.resource::<RenderDevice>(),
|
|
|
|
) {
|
|
|
|
mesh_bindings_shader_defs.push(ShaderDefVal::UInt(
|
|
|
|
"PER_OBJECT_BUFFER_BATCH_SIZE".into(),
|
|
|
|
per_object_buffer_batch_size,
|
|
|
|
));
|
|
|
|
}
|
|
|
|
|
Reduce the size of MeshUniform to improve performance (#9416)
# Objective
- Significantly reduce the size of MeshUniform by only including
necessary data.
## Solution
Local-to-world model transforms are affine, which means they only need a
4x3 matrix to represent them.
`MeshUniform` stores the current, and previous model transforms, and the
inverse transpose of the current model transform, all as 4x4 matrices.
Instead we can store the current, and previous model transforms as 4x3
matrices, and we only need the upper-left 3x3 part of the inverse
transpose of the current model transform. This change allows us to
reduce the serialized MeshUniform size from 208 bytes to 144 bytes,
which is over a 30% saving in data to serialize, and VRAM bandwidth and
space.
## Benchmarks
On an M1 Max, running `many_cubes -- sphere`, main is in yellow, this PR
is in red:
<img width="1484" alt="Screenshot 2023-08-11 at 02 36 43"
src="https://github.com/bevyengine/bevy/assets/302146/7d99c7b3-f2bb-4004-a8d0-4c00f755cb0d">
A reduction in frame time of ~14%.
---
## Changelog
- Changed: Redefined `MeshUniform` to improve performance by using 4x3
affine transforms and reconstructing 4x4 matrices in the shader. Helper
functions were added to `bevy_pbr::mesh_functions` to unpack the data.
`affine_to_square` converts the packed 4x3 in 3x4 matrix data to a 4x4
matrix. `mat2x4_f32_to_mat3x3` converts the 3x3 in mat2x4 + f32 matrix
data back into a 3x3.
## Migration Guide
Shader code before:
```
var model = mesh[instance_index].model;
```
Shader code after:
```
#import bevy_pbr::mesh_functions affine_to_square
var model = affine_to_square(mesh[instance_index].model);
```
2023-08-15 06:00:23 +00:00
|
|
|
render_app
|
|
|
|
.insert_resource(GpuArrayBuffer::<MeshUniform>::new(
|
|
|
|
render_app.world.resource::<RenderDevice>(),
|
|
|
|
))
|
|
|
|
.init_resource::<MeshPipeline>();
|
2023-05-04 22:07:57 +00:00
|
|
|
}
|
2023-07-30 13:17:08 +00:00
|
|
|
|
|
|
|
// Load the mesh_bindings shader module here as it depends on runtime information about
|
|
|
|
// whether storage buffers are supported, or the maximum uniform buffer binding size.
|
|
|
|
load_internal_asset!(
|
|
|
|
app,
|
|
|
|
MESH_BINDINGS_HANDLE,
|
|
|
|
"mesh_bindings.wgsl",
|
|
|
|
Shader::from_wgsl_with_defs,
|
|
|
|
mesh_bindings_shader_defs
|
|
|
|
);
|
2023-05-04 22:07:57 +00:00
|
|
|
}
|
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2023-08-15 06:00:23 +00:00
|
|
|
#[derive(Component)]
|
|
|
|
pub struct MeshTransforms {
|
|
|
|
pub transform: Affine3,
|
|
|
|
pub previous_transform: Affine3,
|
|
|
|
pub flags: u32,
|
|
|
|
}
|
|
|
|
|
|
|
|
#[derive(ShaderType, Clone)]
|
2021-11-18 03:45:02 +00:00
|
|
|
pub struct MeshUniform {
|
2023-08-15 06:00:23 +00:00
|
|
|
// Affine 4x3 matrices transposed to 3x4
|
|
|
|
pub transform: [Vec4; 3],
|
|
|
|
pub previous_transform: [Vec4; 3],
|
|
|
|
// 3x3 matrix packed in mat2x4 and f32 as:
|
|
|
|
// [0].xyz, [1].x,
|
|
|
|
// [1].yz, [2].xy
|
|
|
|
// [2].z
|
|
|
|
pub inverse_transpose_model_a: [Vec4; 2],
|
|
|
|
pub inverse_transpose_model_b: f32,
|
2021-11-18 03:45:02 +00:00
|
|
|
pub flags: u32,
|
|
|
|
}
|
|
|
|
|
2023-08-15 06:00:23 +00:00
|
|
|
impl From<&MeshTransforms> for MeshUniform {
|
|
|
|
fn from(mesh_transforms: &MeshTransforms) -> Self {
|
|
|
|
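// Transpose the 3x3 part so each row, extended with one translation component,
// becomes one `Vec4` row of the packed transposed 3x4 transform.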
let transpose_model_3x3 = mesh_transforms.transform.matrix3.transpose();
|
|
|
|
let transpose_previous_model_3x3 = mesh_transforms.previous_transform.matrix3.transpose();
|
|
|
|
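// Only the upper-left 3x3 of the inverse transpose is needed; it is packed below
// into `inverse_transpose_model_a` (mat2x4) and `inverse_transpose_model_b` (f32).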
let inverse_transpose_model_3x3 = Affine3A::from(&mesh_transforms.transform)
|
|
|
|
.inverse()
|
|
|
|
.matrix3
|
|
|
|
.transpose();
|
|
|
|
Self {
|
|
|
|
transform: [
|
|
|
|
transpose_model_3x3
|
|
|
|
.x_axis
|
|
|
|
.extend(mesh_transforms.transform.translation.x),
|
|
|
|
transpose_model_3x3
|
|
|
|
.y_axis
|
|
|
|
.extend(mesh_transforms.transform.translation.y),
|
|
|
|
transpose_model_3x3
|
|
|
|
.z_axis
|
|
|
|
.extend(mesh_transforms.transform.translation.z),
|
|
|
|
],
|
|
|
|
previous_transform: [
|
|
|
|
transpose_previous_model_3x3
|
|
|
|
.x_axis
|
|
|
|
.extend(mesh_transforms.previous_transform.translation.x),
|
|
|
|
transpose_previous_model_3x3
|
|
|
|
.y_axis
|
|
|
|
.extend(mesh_transforms.previous_transform.translation.y),
|
|
|
|
transpose_previous_model_3x3
|
|
|
|
.z_axis
|
|
|
|
.extend(mesh_transforms.previous_transform.translation.z),
|
|
|
|
],
|
|
|
|
inverse_transpose_model_a: [
|
|
|
|
(
|
|
|
|
inverse_transpose_model_3x3.x_axis,
|
|
|
|
inverse_transpose_model_3x3.y_axis.x,
|
|
|
|
)
|
|
|
|
.into(),
|
|
|
|
(
|
|
|
|
inverse_transpose_model_3x3.y_axis.yz(),
|
|
|
|
inverse_transpose_model_3x3.z_axis.xy(),
|
|
|
|
)
|
|
|
|
.into(),
|
|
|
|
],
|
|
|
|
inverse_transpose_model_b: inverse_transpose_model_3x3.z_axis.z,
|
|
|
|
flags: mesh_transforms.flags,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-09-27 17:51:12 +00:00
|
|
|
// NOTE: These must match the bit flags in bevy_pbr/src/render/mesh_types.wgsl!
|
2021-11-18 03:45:02 +00:00
|
|
|
bitflags::bitflags! {
|
|
|
|
#[repr(transparent)]
|
|
|
|
struct MeshFlags: u32 {
|
|
|
|
const SHADOW_RECEIVER = (1 << 0);
|
2022-08-18 21:54:40 +00:00
|
|
|
// Indicates the sign of the determinant of the 3x3 model matrix. If the sign is positive,
|
|
|
|
// then the flag should be set, else it should not be set.
|
|
|
|
const SIGN_DETERMINANT_MODEL_3X3 = (1 << 31);
|
2021-11-18 03:45:02 +00:00
|
|
|
const NONE = 0;
|
|
|
|
const UNINITIALIZED = 0xFFFF;
|
|
|
|
}
|
|
|
|
}
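// A hedged sketch (an assumption, not the exact bevy helper) of how these flags
// are derived for a single mesh instance during extraction:
#[allow(dead_code)]
fn mesh_flags_for(transform: &GlobalTransform, not_shadow_receiver: bool) -> MeshFlags {
    let mut flags = if not_shadow_receiver {
        MeshFlags::empty()
    } else {
        MeshFlags::SHADOW_RECEIVER
    };
    // SIGN_DETERMINANT_MODEL_3X3 tracks the sign of the 3x3 model matrix determinant.
    if transform.affine().matrix3.determinant().is_sign_positive() {
        flags |= MeshFlags::SIGN_DETERMINANT_MODEL_3X3;
    }
    flags
}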
|
|
|
|
|
|
|
|
pub fn extract_meshes(
|
|
|
|
mut commands: Commands,
|
2022-07-04 12:44:23 +00:00
|
|
|
mut prev_caster_commands_len: Local<usize>,
|
|
|
|
mut prev_not_caster_commands_len: Local<usize>,
|
Make `RenderStage::Extract` run on the render world (#4402)
# Objective
- Currently, the `Extract` `RenderStage` is executed on the main world, with the render world available as a resource.
- However, when needing access to resources in the render world (e.g. to mutate them), the only way to do so was to get exclusive access to the whole `RenderWorld` resource.
- This meant that effectively only one extract which wrote to resources could run at a time.
- We didn't previously make `Extract`ing writing to the world a non-happy path, even though we want to discourage that.
## Solution
- Move the extract stage to run on the render world.
- Add the main world as a `MainWorld` resource.
- Add an `Extract` `SystemParam` as a convenience to access a (read only) `SystemParam` in the main world during `Extract`.
## Future work
It should be possible to avoid needing to use `get_or_spawn` for the render commands, since now the `Commands`' `Entities` matches up with the world being executed on.
We need to determine how this interacts with https://github.com/bevyengine/bevy/pull/3519
It's theoretically possible to remove the need for the `value` method on `Extract`. However, that requires slightly changing the `SystemParam` interface, which would make it more complicated. That would probably mess up the `SystemState` api too.
## Todo
I still need to add doc comments to `Extract`.
---
## Changelog
### Changed
- The `Extract` `RenderStage` now runs on the render world (instead of the main world as before).
You must use the `Extract` `SystemParam` to access the main world during the extract phase.
Resources on the render world can now be accessed using `ResMut` during extract.
### Removed
- `Commands::spawn_and_forget`. Use `Commands::get_or_spawn(e).insert_bundle(bundle)` instead
## Migration Guide
The `Extract` `RenderStage` now runs on the render world (instead of the main world as before).
You must use the `Extract` `SystemParam` to access the main world during the extract phase. `Extract` takes a single type parameter, which is any system parameter (such as `Res`, `Query` etc.). It will extract this from the main world and return the result of this extraction when `value` is called on it.
For example, if previously your extract system looked like:
```rust
fn extract_clouds(mut commands: Commands, clouds: Query<Entity, With<Cloud>>) {
for cloud in clouds.iter() {
commands.get_or_spawn(cloud).insert(Cloud);
}
}
```
the new version would be:
```rust
fn extract_clouds(mut commands: Commands, mut clouds: Extract<Query<Entity, With<Cloud>>>) {
for cloud in clouds.value().iter() {
commands.get_or_spawn(cloud).insert(Cloud);
}
}
```
The diff is:
```diff
--- a/src/clouds.rs
+++ b/src/clouds.rs
@@ -1,5 +1,5 @@
-fn extract_clouds(mut commands: Commands, clouds: Query<Entity, With<Cloud>>) {
- for cloud in clouds.iter() {
+fn extract_clouds(mut commands: Commands, mut clouds: Extract<Query<Entity, With<Cloud>>>) {
+ for cloud in clouds.value().iter() {
commands.get_or_spawn(cloud).insert(Cloud);
}
}
```
You can now also access resources from the render world using the normal system parameters during `Extract`:
```rust
fn extract_assets(mut render_assets: ResMut<MyAssets>, source_assets: Extract<Res<MyAssets>>) {
*render_assets = source_assets.clone();
}
```
Please note that all existing extract systems need to be updated to match this new style; even if they currently compile they will not run as expected. A warning will be emitted on a best-effort basis if this is not met.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-07-08 23:56:33 +00:00
|
|
|
meshes_query: Extract<
|
|
|
|
Query<(
|
|
|
|
Entity,
|
|
|
|
&ComputedVisibility,
|
|
|
|
&GlobalTransform,
|
2023-03-27 22:22:40 +00:00
|
|
|
Option<&PreviousGlobalTransform>,
|
2022-07-08 23:56:33 +00:00
|
|
|
&Handle<Mesh>,
|
|
|
|
Option<With<NotShadowReceiver>>,
|
|
|
|
Option<With<NotShadowCaster>>,
|
|
|
|
)>,
|
|
|
|
>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines and shaders not being ready yet. We can abort a render command early in these cases, preventing Bevy from trying to set bind groups or issue draw calls for pipelines that couldn't be built. This could also be used in the future for things like "components not existing on entities yet".
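As a rough, self-contained illustration of that control flow (the types below are stand-ins, not the actual render command trait or pipeline cache API):
```rust
// Stand-in types: a real render command works against Bevy's pipeline cache
// and tracked render pass, but the early-out pattern is the same.
enum RenderCommandResult {
    Success,
    Failure,
}

struct Pipeline;
struct PipelineCache;

impl PipelineCache {
    fn get(&self, _id: usize) -> Option<&Pipeline> {
        None // pipeline still compiling
    }
}

fn draw_mesh(cache: &PipelineCache, pipeline_id: usize) -> RenderCommandResult {
    let Some(_pipeline) = cache.get(pipeline_id) else {
        // Shader/pipeline not ready yet: abort this command and skip the draw call.
        return RenderCommandResult::Failure;
    };
    // ...set the pipeline, bind groups, and issue the draw call here...
    RenderCommandResult::Success
}
```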
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
) {
|
2022-07-04 12:44:23 +00:00
|
|
|
let mut caster_commands = Vec::with_capacity(*prev_caster_commands_len);
|
|
|
|
let mut not_caster_commands = Vec::with_capacity(*prev_not_caster_commands_len);
|
Visibility Inheritance, universal ComputedVisibility and RenderLayers support (#5310)
# Objective
Fixes #4907. Fixes #838. Fixes #5089.
Supersedes #5146. Supersedes #2087. Supersedes #865. Supersedes #5114
Visibility is currently entirely local. Set a parent entity to be invisible, and the children are still visible. This makes it hard for users to hide entire hierarchies of entities.
Additionally, the semantics of `Visibility` vs `ComputedVisibility` are inconsistent across entity types. 3D meshes use `ComputedVisibility` as the "definitive" visibility component, with `Visibility` being just one data source. Sprites just use `Visibility`, which means they can't feed off of `ComputedVisibility` data, such as culling information, RenderLayers, and (added in this pr) visibility inheritance information.
## Solution
Splits `ComputedVisibility::is_visible` into `ComputedVisibility::is_visible_in_view` and `ComputedVisibility::is_visible_in_hierarchy`. For each visible entity, `is_visible_in_hierarchy` is computed by propagating visibility down the hierarchy. The `ComputedVisibility::is_visible()` function combines these two booleans for the canonical "is this entity visible" function.
Additionally, all entities that have `Visibility` now also have `ComputedVisibility`. Sprites, Lights, and UI entities now use `ComputedVisibility` when appropriate.
This means that in addition to visibility inheritance, everything using Visibility now also supports RenderLayers. Notably, Sprites (and other 2d objects) now support `RenderLayers` and work properly across multiple views.
Also note that this does increase the amount of work done per sprite. Bevymark with 100,000 sprites on `main` runs in `0.017612` seconds and this runs in `0.01902`. That is certainly a gap, but I believe the api consistency and extra functionality this buys us is worth it. See [this thread](https://github.com/bevyengine/bevy/pull/5146#issuecomment-1182783452) for more info. Note that #5146 in combination with #5114 _are_ a viable alternative to this PR and _would_ perform better, but that comes at the cost of api inconsistencies and doing visibility calculations in the "wrong" place. The current visibility system does have potential for performance improvements. I would prefer to evolve that one system as a whole rather than doing custom hacks / different behaviors for each feature slice.
Here is a "split screen" example where the left camera uses RenderLayers to filter out the blue sprite.
![image](https://user-images.githubusercontent.com/2694663/178814868-2e9a2173-bf8c-4c79-8815-633899d492c3.png)
Note that this builds directly on #5146 and that @james7132 deserves the credit for the baseline visibility inheritance work. This pr moves the inherited visibility field into `ComputedVisibility`, then does the additional work of porting everything to `ComputedVisibility`. See my [comments here](https://github.com/bevyengine/bevy/pull/5146#issuecomment-1182783452) for rationale.
## Follow up work
* Now that lights use ComputedVisibility, VisibleEntities now includes "visible lights" in the entity list. Functionally not a problem as we use queries to filter the list down in the desired context. But we should consider splitting this out into a separate`VisibleLights` collection for both clarity and performance reasons. And _maybe_ even consider scoping `VisibleEntities` down to `VisibleMeshes`?.
* Investigate alternative sprite rendering impls (in combination with visibility system tweaks) that avoid re-generating a per-view fixedbitset of visible entities every frame, then checking each ExtractedEntity. This is where most of the performance overhead lives. Ex: we could generate ExtractedEntities per-view using the VisibleEntities list, avoiding the need for the bitset.
* Should ComputedVisibility use bitflags under the hood? This would cut down on the size of the component, potentially speed up the `is_visible()` function, and allow us to cheaply expand ComputedVisibility with more data (ex: split out local visibility and parent visibility, add more culling classes, etc).
---
## Changelog
* ComputedVisibility now takes hierarchy visibility into account.
* 2D, UI and Light entities now use the ComputedVisibility component.
## Migration Guide
If you were previously reading `Visibility::is_visible` as the "actual visibility" for sprites or lights, use `ComputedVisibility::is_visible()` instead:
```rust
// before (0.7)
fn system(query: Query<&Visibility>) {
for visibility in query.iter() {
if visibility.is_visible {
log!("found visible entity");
}
}
}
// after (0.8)
fn system(query: Query<&ComputedVisibility>) {
for visibility in query.iter() {
if visibility.is_visible() {
log!("found visible entity");
}
}
}
```
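The two halves can also be inspected separately; a small sketch against the 0.8-era API (the method names are the ones introduced above, and import paths may differ by version):
```rust
use bevy::prelude::*;

fn report_culled_entities(query: Query<(Entity, &ComputedVisibility)>) {
    for (entity, visibility) in query.iter() {
        // is_visible() == is_visible_in_hierarchy() && is_visible_in_view()
        if visibility.is_visible_in_hierarchy() && !visibility.is_visible_in_view() {
            info!("{:?} is visible in the hierarchy but culled in every view", entity);
        }
    }
}
```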
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-07-15 23:24:42 +00:00
|
|
|
let visible_meshes = meshes_query.iter().filter(|(_, vis, ..)| vis.is_visible());
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
|
Temporal Antialiasing (TAA) (#7291)
2023-03-27 22:22:40 +00:00
|
|
|
for (entity, _, transform, previous_transform, handle, not_receiver, not_caster) in
|
|
|
|
visible_meshes
|
|
|
|
{
|
Reduce the size of MeshUniform to improve performance (#9416)
# Objective
- Significantly reduce the size of MeshUniform by only including
necessary data.
## Solution
Local-to-world model transforms are affine. This means they only need a
4x3 matrix to represent them.
`MeshUniform` stores the current and previous model transforms, and the
inverse transpose of the current model transform, all as 4x4 matrices.
Instead we can store the current and previous model transforms as 4x3
matrices, and we only need the upper-left 3x3 part of the inverse
transpose of the current model transform. This change reduces the
serialized `MeshUniform` size from 208 bytes to 144 bytes, which is over
a 30% saving in data to serialize, and in VRAM bandwidth and space.
## Benchmarks
On an M1 Max, running `many_cubes -- sphere`, main is in yellow, this PR
is in red:
<img width="1484" alt="Screenshot 2023-08-11 at 02 36 43"
src="https://github.com/bevyengine/bevy/assets/302146/7d99c7b3-f2bb-4004-a8d0-4c00f755cb0d">
A reduction in frame time of ~14%.
---
## Changelog
- Changed: Redefined `MeshUniform` to improve performance by using 4x3
affine transforms and reconstructing 4x4 matrices in the shader. Helper
functions were added to `bevy_pbr::mesh_functions` to unpack the data:
`affine_to_square` converts the 4x3 data (packed as a 3x4 matrix) to a
4x4 matrix, and `mat2x4_f32_to_mat3x3` converts the 3x3 data (packed as
a mat2x4 plus an f32) back into a 3x3 matrix.
## Migration Guide
Shader code before:
```
var model = mesh[instance_index].model;
```
Shader code after:
```
#import bevy_pbr::mesh_functions affine_to_square
var model = affine_to_square(mesh[instance_index].model);
```
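To make the packing concrete, here is a small standalone sketch of the same idea using `glam` (which `bevy_math` re-exports). It mirrors what `affine_to_square` undoes on the GPU, but is not Bevy's exact uniform layout:
```rust
use glam::{Mat4, Vec3, Vec4};

// Store only the first three rows of an affine 4x4 (the fourth row is always 0 0 0 1).
fn pack_affine(m: Mat4) -> [Vec4; 3] {
    let t = m.transpose(); // columns of `t` are the rows of `m`
    [t.x_axis, t.y_axis, t.z_axis]
}

// Rebuild the full 4x4 from the packed rows.
fn unpack_affine(rows: [Vec4; 3]) -> Mat4 {
    Mat4::from_cols(
        Vec4::new(rows[0].x, rows[1].x, rows[2].x, 0.0),
        Vec4::new(rows[0].y, rows[1].y, rows[2].y, 0.0),
        Vec4::new(rows[0].z, rows[1].z, rows[2].z, 0.0),
        Vec4::new(rows[0].w, rows[1].w, rows[2].w, 1.0),
    )
}

fn main() {
    let m = Mat4::from_translation(Vec3::new(1.0, 2.0, 3.0));
    assert_eq!(unpack_affine(pack_affine(m)), m);
}
```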
2023-08-15 06:00:23 +00:00
|
|
|
let transform = transform.affine();
|
Temporal Antialiasing (TAA) (#7291)
2023-03-27 22:22:40 +00:00
|
|
|
let previous_transform = previous_transform.map(|t| t.0).unwrap_or(transform);
|
2022-08-18 21:54:40 +00:00
|
|
|
let mut flags = if not_receiver.is_some() {
|
|
|
|
MeshFlags::empty()
|
2022-07-04 12:44:23 +00:00
|
|
|
} else {
|
2022-08-18 21:54:40 +00:00
|
|
|
MeshFlags::SHADOW_RECEIVER
|
2022-07-04 12:44:23 +00:00
|
|
|
};
|
Reduce the size of MeshUniform to improve performance (#9416)
2023-08-15 06:00:23 +00:00
|
|
|
if transform.matrix3.determinant().is_sign_positive() {
|
2022-08-18 21:54:40 +00:00
|
|
|
flags |= MeshFlags::SIGN_DETERMINANT_MODEL_3X3;
|
|
|
|
}
|
Reduce the size of MeshUniform to improve performance (#9416)
2023-08-15 06:00:23 +00:00
|
|
|
let transforms = MeshTransforms {
|
|
|
|
transform: (&transform).into(),
|
|
|
|
previous_transform: (&previous_transform).into(),
|
2023-06-01 08:41:42 +00:00
|
|
|
flags: flags.bits(),
|
2022-07-04 12:44:23 +00:00
|
|
|
};
|
|
|
|
if not_caster.is_some() {
|
Reduce the size of MeshUniform to improve performance (#9416)
2023-08-15 06:00:23 +00:00
|
|
|
not_caster_commands.push((entity, (handle.clone_weak(), transforms, NotShadowCaster)));
|
2022-07-04 12:44:23 +00:00
|
|
|
} else {
|
Reduce the size of MeshUniform to improve performance (#9416)
2023-08-15 06:00:23 +00:00
|
|
|
caster_commands.push((entity, (handle.clone_weak(), transforms)));
|
2022-07-04 12:44:23 +00:00
|
|
|
}
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
2022-07-04 12:44:23 +00:00
|
|
|
*prev_caster_commands_len = caster_commands.len();
|
|
|
|
*prev_not_caster_commands_len = not_caster_commands.len();
|
|
|
|
commands.insert_or_spawn_batch(caster_commands);
|
|
|
|
commands.insert_or_spawn_batch(not_caster_commands);
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2022-03-29 18:31:13 +00:00
|
|
|
#[derive(Component)]
|
|
|
|
pub struct SkinnedMeshJoints {
|
|
|
|
pub index: u32,
|
|
|
|
}
|
|
|
|
|
|
|
|
impl SkinnedMeshJoints {
|
|
|
|
#[inline]
|
|
|
|
pub fn build(
|
|
|
|
skin: &SkinnedMesh,
|
|
|
|
inverse_bindposes: &Assets<SkinnedMeshInverseBindposes>,
|
|
|
|
joints: &Query<&GlobalTransform>,
|
2022-12-20 16:17:05 +00:00
|
|
|
buffer: &mut BufferVec<Mat4>,
|
2022-03-29 18:31:13 +00:00
|
|
|
) -> Option<Self> {
|
|
|
|
let inverse_bindposes = inverse_bindposes.get(&skin.inverse_bindposes)?;
|
2022-04-07 16:16:36 +00:00
|
|
|
let start = buffer.len();
|
2022-12-20 16:17:05 +00:00
|
|
|
let target = start + skin.joints.len().min(MAX_JOINTS);
|
|
|
|
buffer.extend(
|
|
|
|
joints
|
|
|
|
.iter_many(&skin.joints)
|
|
|
|
.zip(inverse_bindposes.iter())
|
2023-08-28 16:58:45 +00:00
|
|
|
.take(MAX_JOINTS)
|
2022-12-20 16:17:05 +00:00
|
|
|
.map(|(joint, bindpose)| joint.affine() * *bindpose),
|
|
|
|
);
|
|
|
|
// iter_many will skip any failed fetches. This will cause it to assign the wrong bones,
|
|
|
|
// so just bail by truncating to the start.
|
|
|
|
if buffer.len() != target {
|
|
|
|
buffer.truncate(start);
|
|
|
|
return None;
|
2022-03-29 18:31:13 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Pad to 256 byte alignment
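// (4 Mat4s of 64 bytes each = 256 bytes, matching the common 256-byte
// min_uniform_buffer_offset_alignment required for dynamic uniform bindings)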
|
|
|
|
while buffer.len() % 4 != 0 {
|
|
|
|
buffer.push(Mat4::ZERO);
|
|
|
|
}
|
|
|
|
Some(Self {
|
|
|
|
index: start as u32,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in Bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The user-accessible `MorphWeights` component controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh
may have different weights); all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit this to 16 poses per mesh, and a total of
256 poses.
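As a sketch of driving this from game code (assuming a `weights_mut` accessor on `MorphWeights`; treat the import path and method names as illustrative rather than a verified API surface):
```rust
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

// Oscillate the first morph target weight on every entity that has MorphWeights.
fn animate_first_weight(time: Res<Time>, mut query: Query<&mut MorphWeights>) {
    for mut weights in &mut query {
        if let Some(first) = weights.weights_mut().first_mut() {
            *first = time.elapsed_seconds().sin() * 0.5 + 0.5;
        }
    }
}
```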
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`. These allow defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers); see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` control mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (a single
level of hierarchy) of an entity. This allows controlling several mesh
primitives through a single entity, _as per the glTF spec_.
- Add `MorphTargetNames` component, naming each indices of loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- handle morph targets animations in `bevy_animation` (previously, it
was a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
/// Updates the index to a byte offset into the [`SkinnedMeshUniform`] buffer (index × `size_of::<Mat4>()`).
|
2022-03-29 18:31:13 +00:00
|
|
|
pub fn to_buffer_index(mut self) -> Self {
|
|
|
|
self.index *= std::mem::size_of::<Mat4>() as u32;
|
|
|
|
self
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
pub fn extract_skinned_meshes(
|
|
|
|
mut commands: Commands,
|
|
|
|
mut previous_len: Local<usize>,
|
2022-12-20 16:17:05 +00:00
|
|
|
mut uniform: ResMut<SkinnedMeshUniform>,
|
Make `RenderStage::Extract` run on the render world (#4402)
# Objective
- Currently, the `Extract` `RenderStage` is executed on the main world, with the render world available as a resource.
- However, when needing access to resources in the render world (e.g. to mutate them), the only way to do so was to get exclusive access to the whole `RenderWorld` resource.
- This meant that effectively only one extract which wrote to resources could run at a time.
- We didn't previously make `Extract`ing writing to the world a non-happy path, even though we want to discourage that.
## Solution
- Move the extract stage to run on the render world.
- Add the main world as a `MainWorld` resource.
- Add an `Extract` `SystemParam` as a convenience to access a (read only) `SystemParam` in the main world during `Extract`.
## Future work
It should be possible to avoid needing to use `get_or_spawn` for the render commands, since now the `Commands`' `Entities` matches up with the world being executed on.
We need to determine how this interacts with https://github.com/bevyengine/bevy/pull/3519
It's theoretically possible to remove the need for the `value` method on `Extract`. However, that requires slightly changing the `SystemParam` interface, which would make it more complicated. That would probably mess up the `SystemState` api too.
## Todo
I still need to add doc comments to `Extract`.
---
## Changelog
### Changed
- The `Extract` `RenderStage` now runs on the render world (instead of the main world as before).
You must use the `Extract` `SystemParam` to access the main world during the extract phase.
Resources on the render world can now be accessed using `ResMut` during extract.
### Removed
- `Commands::spawn_and_forget`. Use `Commands::get_or_spawn(e).insert_bundle(bundle)` instead
2022-07-08 23:56:33 +00:00
|
|
|
query: Extract<Query<(Entity, &ComputedVisibility, &SkinnedMesh)>>,
|
|
|
|
inverse_bindposes: Extract<Res<Assets<SkinnedMeshInverseBindposes>>>,
|
|
|
|
joint_query: Extract<Query<&GlobalTransform>>,
|
2022-03-29 18:31:13 +00:00
|
|
|
) {
|
2022-12-20 16:17:05 +00:00
|
|
|
uniform.buffer.clear();
|
2022-03-29 18:31:13 +00:00
|
|
|
let mut values = Vec::with_capacity(*previous_len);
|
|
|
|
let mut last_start = 0;
|
|
|
|
|
2022-09-20 00:29:10 +00:00
|
|
|
for (entity, computed_visibility, skin) in &query {
|
Visibility Inheritance, universal ComputedVisibility and RenderLayers support (#5310)
2022-07-15 23:24:42 +00:00
|
|
|
if !computed_visibility.is_visible() {
|
2022-03-29 18:31:13 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
// PERF: This can be expensive, can we move this to prepare?
|
|
|
|
if let Some(skinned_joints) =
|
2022-12-20 16:17:05 +00:00
|
|
|
SkinnedMeshJoints::build(skin, &inverse_bindposes, &joint_query, &mut uniform.buffer)
|
2022-03-29 18:31:13 +00:00
|
|
|
{
|
|
|
|
last_start = last_start.max(skinned_joints.index as usize);
|
2022-10-29 18:15:28 +00:00
|
|
|
values.push((entity, skinned_joints.to_buffer_index()));
|
2022-03-29 18:31:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Pad out the buffer to ensure that there's enough space for bindings
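// (the skinned mesh binding always covers MAX_JOINTS matrices starting at a
// dynamic offset, so the buffer must extend at least that far past the last start)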
|
2022-12-20 16:17:05 +00:00
|
|
|
while uniform.buffer.len() - last_start < MAX_JOINTS {
|
|
|
|
uniform.buffer.push(Mat4::ZERO);
|
2022-03-29 18:31:13 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
*previous_len = values.len();
|
|
|
|
commands.insert_or_spawn_batch(values);
|
|
|
|
}
|
|
|
|
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062
# Objective
- Reorder render schedule sets to allow data preparation when phase item
order is known to support improved batching
- Part of the batching/instancing etc plan from here:
https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one.
Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new
ordering
## Solution
- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and
sets in the render schedule.
- Put prepare_assets systems in the `PrepareAssets` set
- If explicit dependencies are needed on Mesh or Material RenderAssets
then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between
`ExtractCommands` and Queue
- Move `queue_mesh*_bind_group` to the Prepare stage
- Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a
`PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroup` set
after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
- `prepare_lights` creates views and this must happen before `Queue`
- This system needs refactoring to stop handling all responsibilities
- Gather lights, sort, and create shadow map views. Store sorted light
entities in a resource
- Remove `BatchedPhaseItem`
- Replace `batch_range` with `batch_size` representing how many items to
skip after rendering the item or to skip the item entirely if
`batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase
items and `prepare_sprites` for batching after the `PhaseSort`
- `PhaseItem`s are still inserted in `queue_sprites`
- After sorting, adjacent compatible sprite phase items are accumulated
into `SpriteBatch` components on the first entity of each batch,
containing a range of vertex indices. The associated `PhaseItem`'s
`batch_size` is updated appropriately.
- `SpriteBatch` items are then drawn skipping over the other items in
the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`
---
## Changelog
Changed:
- Reordered and reworked render app schedule sets. The main change is
that data is extracted, queued, sorted, and then prepared when the order
of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the
reordering.
## Migration Guide
- Assets such as materials and meshes should now be created in
`PrepareAssets` e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`
e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in
`PrepareResources`, e.g. `prepare_prepass_textures`,
`prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups` e.g.
`prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare` where the
order of the phase items is known e.g. `prepare_sprites`
## Next Steps
- Introduce some generic mechanism to ensure items that can be batched
are grouped in the phase item order, currently you could easily have
`[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implementing batching across the rest of bevy
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2023-08-27 14:33:49 +00:00
|
|
|
#[allow(clippy::too_many_arguments)]
|
|
|
|
pub fn prepare_mesh_uniforms(
|
|
|
|
mut seen: Local<FixedBitSet>,
|
Reduce the size of MeshUniform to improve performance (#9416)
2023-08-15 06:00:23 +00:00
|
|
|
mut commands: Commands,
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
2023-08-27 14:33:49 +00:00
|
|
|
mut previous_len: Local<usize>,
|
Reduce the size of MeshUniform to improve performance (#9416)
# Objective
- Significantly reduce the size of MeshUniform by only including
necessary data.
## Solution
Local to world, model transforms are affine. This means they only need a
4x3 matrix to represent them.
`MeshUniform` stores the current, and previous model transforms, and the
inverse transpose of the current model transform, all as 4x4 matrices.
Instead we can store the current, and previous model transforms as 4x3
matrices, and we only need the upper-left 3x3 part of the inverse
transpose of the current model transform. This change allows us to
reduce the serialized MeshUniform size from 208 bytes to 144 bytes,
which is over a 30% saving in data to serialize, and VRAM bandwidth and
space.
## Benchmarks
On an M1 Max, running `many_cubes -- sphere`, main is in yellow, this PR
is in red:
<img width="1484" alt="Screenshot 2023-08-11 at 02 36 43"
src="https://github.com/bevyengine/bevy/assets/302146/7d99c7b3-f2bb-4004-a8d0-4c00f755cb0d">
A reduction in frame time of ~14%.
---
## Changelog
- Changed: Redefined `MeshUniform` to improve performance by using 4x3
affine transforms and reconstructing 4x4 matrices in the shader. Helper
functions were added to `bevy_pbr::mesh_functions` to unpack the data.
`affine_to_square` converts the packed 4x3 data (stored as a 3x4 matrix)
to a 4x4 matrix. `mat2x4_f32_to_mat3x3` converts the 3x3 data (stored as
a mat2x4 plus an f32) back into a 3x3 matrix.
## Migration Guide
Shader code before:
```wgsl
var model = mesh[instance_index].model;
```
Shader code after:
```wgsl
#import bevy_pbr::mesh_functions affine_to_square
var model = affine_to_square(mesh[instance_index].model);
```
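As a Rust-side illustration of the packing described above (not this PR's actual code; the struct and field names are assumptions):
```rust
use bevy::math::{Mat3, Mat4, Vec4};

// Illustrative packing sketch; not Bevy's exact layout.
struct PackedMeshTransforms {
    transform: [Vec4; 3],           // first three rows of the 4x4 model matrix
    previous_transform: [Vec4; 3],  // (the fourth row of an affine matrix is 0,0,0,1)
    inverse_transpose_a: [Vec4; 2], // first 8 floats of the 3x3 inverse transpose
    inverse_transpose_b: f32,       // the 9th float
}

fn pack(model: Mat4, previous_model: Mat4) -> PackedMeshTransforms {
    let rows = |m: Mat4| [m.row(0), m.row(1), m.row(2)]; // drop the constant last row
    let it = Mat3::from_mat4(model).inverse().transpose().to_cols_array();
    PackedMeshTransforms {
        transform: rows(model),
        previous_transform: rows(previous_model),
        inverse_transpose_a: [
            Vec4::new(it[0], it[1], it[2], it[3]),
            Vec4::new(it[4], it[5], it[6], it[7]),
        ],
        inverse_transpose_b: it[8],
    }
}
```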
2023-08-15 06:00:23 +00:00
|
|
|
render_device: Res<RenderDevice>,
|
|
|
|
render_queue: Res<RenderQueue>,
|
|
|
|
mut gpu_array_buffer: ResMut<GpuArrayBuffer<MeshUniform>>,
|
2023-08-27 14:33:49 +00:00
|
|
|
views: Query<(
|
|
|
|
&RenderPhase<Opaque3d>,
|
|
|
|
&RenderPhase<Transparent3d>,
|
|
|
|
&RenderPhase<AlphaMask3d>,
|
|
|
|
)>,
|
|
|
|
shadow_views: Query<&RenderPhase<Shadow>>,
|
|
|
|
meshes: Query<(Entity, &MeshTransforms)>,
|
2023-08-15 06:00:23 +00:00
|
|
|
) {
|
|
|
|
gpu_array_buffer.clear();
|
2023-08-27 14:33:49 +00:00
|
|
|
seen.clear();
|
|
|
|
|
|
|
|
let mut indices = Vec::with_capacity(*previous_len);
|
|
|
|
let mut push_indices = |(mesh, mesh_uniform): (Entity, &MeshTransforms)| {
|
|
|
|
let index = mesh.index() as usize;
|
|
|
|
if !seen.contains(index) {
|
|
|
|
if index >= seen.len() {
|
|
|
|
seen.grow(index + 1);
|
|
|
|
}
|
|
|
|
seen.insert(index);
|
|
|
|
indices.push((mesh, gpu_array_buffer.push(mesh_uniform.into())));
|
|
|
|
}
|
|
|
|
};
|
2023-08-15 06:00:23 +00:00
|
|
|
|
2023-08-27 14:33:49 +00:00
|
|
|
for (opaque_phase, transparent_phase, alpha_phase) in &views {
|
|
|
|
meshes
|
|
|
|
.iter_many(opaque_phase.iter_entities())
|
|
|
|
.for_each(&mut push_indices);
|
|
|
|
|
|
|
|
meshes
|
|
|
|
.iter_many(transparent_phase.iter_entities())
|
|
|
|
.for_each(&mut push_indices);
|
|
|
|
|
|
|
|
meshes
|
|
|
|
.iter_many(alpha_phase.iter_entities())
|
|
|
|
.for_each(&mut push_indices);
|
|
|
|
}
|
|
|
|
|
|
|
|
for shadow_phase in &shadow_views {
|
|
|
|
meshes
|
|
|
|
.iter_many(shadow_phase.iter_entities())
|
|
|
|
.for_each(&mut push_indices);
|
|
|
|
}
|
|
|
|
|
|
|
|
*previous_len = indices.len();
|
|
|
|
commands.insert_or_spawn_batch(indices);
|
2023-08-15 06:00:23 +00:00
|
|
|
|
|
|
|
gpu_array_buffer.write_buffer(&render_device, &render_queue);
|
|
|
|
}
|
|
|
|
|
Make `Resource` trait opt-in, requiring `#[derive(Resource)]` V2 (#5577)
*This PR description is an edited copy of #5007, written by @alice-i-cecile.*
# Objective
Follow-up to https://github.com/bevyengine/bevy/pull/2254. The `Resource` trait currently has a blanket implementation for all types that meet its bounds.
While ergonomic, this results in several drawbacks:
* it is possible to make confusing, silent mistakes such as inserting a function pointer (Foo) rather than a value (Foo::Bar) as a resource
* it is challenging to discover if a type is intended to be used as a resource
* we cannot later add customization options (see the [RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/27-derive-component.md) for the equivalent choice for Component).
* dependencies can use the same Rust type as a resource in invisibly conflicting ways
* raw Rust types used as resources cannot preserve privacy appropriately, as anyone able to access that type can read and write to internal values
* we cannot capture a definitive list of possible resources to display to users in an editor
## Notes to reviewers
* Review this commit-by-commit; there's effectively no back-tracking and there's a lot of churn in some of these commits.
*ira: My commits are not as well organized :')*
* I've relaxed the bound on Local to Send + Sync + 'static: I don't think these concerns apply there, so this can keep things simple. Storing e.g. a u32 in a Local is fine, because there's a variable name attached explaining what it does.
* I think this is a bad place for the Resource trait to live, but I've left it in place to make reviewing easier. IMO that's best tackled with https://github.com/bevyengine/bevy/issues/4981.
## Changelog
`Resource` is no longer automatically implemented for all matching types. Instead, use the new `#[derive(Resource)]` macro.
## Migration Guide
Add `#[derive(Resource)]` to all types you are using as a resource.
If you are using a third party type as a resource, wrap it in a tuple struct to bypass orphan rules. Consider deriving `Deref` and `DerefMut` to improve ergonomics.
`ClearColor` no longer implements `Component`. Using `ClearColor` as a component in 0.8 did nothing.
Use the `ClearColorConfig` in the `Camera3d` and `Camera2d` components instead.
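For example, the wrapping pattern described above might look like this minimal sketch (not from the PR; `Score` and `StartTime` are made-up names, and the `Deref`/`DerefMut` derives are assumed to be in scope via the prelude):
```rust
use bevy::prelude::*;
use std::time::Instant;

// A type you own just derives Resource.
#[derive(Resource)]
struct Score(u32);

// A foreign type can't implement Resource directly (orphan rules), so wrap it
// in a tuple struct; deriving Deref/DerefMut keeps access ergonomic.
#[derive(Resource, Deref, DerefMut)]
struct StartTime(Instant);

fn setup(mut commands: Commands) {
    commands.insert_resource(Score(0));
    commands.insert_resource(StartTime(Instant::now()));
}
```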
Co-authored-by: Alice <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: devil-ira <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-08-08 21:36:35 +00:00
|
|
|
#[derive(Resource, Clone)]
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to create bind groups / issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
pub struct MeshPipeline {
|
|
|
|
pub view_layout: BindGroupLayout,
|
2023-01-19 22:11:13 +00:00
|
|
|
pub view_layout_multisampled: BindGroupLayout,
|
2021-11-18 03:45:02 +00:00
|
|
|
// This dummy white texture is to be used in place of optional StandardMaterial textures
|
|
|
|
pub dummy_white_gpu_image: GpuImage,
|
2022-04-07 16:16:35 +00:00
|
|
|
pub clustered_forward_buffer_binding_type: BufferBindingType,
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer (a 2d texture) is a
different target; we use one 2d layer per target because the number of
attributes × components × animated vertices is expected to always exceed
the maximum pixel row size limit of WebGL2. On the CPU side it copies
fairly closely the way skinning is implemented, while on the GPU side
the shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copy of the same mesh may
have different weights), all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
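As a small usage sketch (not from this PR; the `MorphWeights` module path and `weights_mut` method name are assumptions), driving the first pose's weight from a system could look like:
```rust
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

// Oscillate the first morph target weight on every entity that has MorphWeights;
// the blend of poses on each mesh instance is driven entirely by these weights.
fn animate_first_pose(time: Res<Time>, mut query: Query<&mut MorphWeights>) {
    let w = time.elapsed_seconds().sin() * 0.5 + 0.5; // map sin() into 0..1
    for mut weights in &mut query {
        if let Some(first) = weights.weights_mut().first_mut() {
            *first = w;
        }
    }
}
```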
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of fewer and more attributes (e.g. animated UVs, arbitrary
animated attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`; this allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers), see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` control mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- handle morph target animations in `bevy_animation` (previously, it
was a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
pub mesh_layouts: MeshLayouts,
|
2023-07-31 13:14:40 +00:00
|
|
|
/// `MeshUniform`s are stored in arrays in buffers. If storage buffers are available, they
|
|
|
|
/// are used and this will be `None`, otherwise uniform buffers will be used with batches
|
|
|
|
/// of this many `MeshUniform`s, stored at dynamic offsets within the uniform buffer.
|
|
|
|
/// Use code like this in custom shaders:
|
|
|
|
/// ```wgsl
|
|
|
|
/// ##ifdef PER_OBJECT_BUFFER_BATCH_SIZE
|
|
|
|
/// @group(2) @binding(0)
|
|
|
|
/// var<uniform> mesh: array<Mesh, #{PER_OBJECT_BUFFER_BATCH_SIZE}u>;
|
|
|
|
/// ##else
|
|
|
|
/// @group(2) @binding(0)
|
|
|
|
/// var<storage> mesh: array<Mesh>;
|
|
|
|
/// ##endif // PER_OBJECT_BUFFER_BATCH_SIZE
|
|
|
|
/// ```
|
|
|
|
pub per_object_buffer_batch_size: Option<u32>,
|
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
impl FromWorld for MeshPipeline {
|
|
|
|
fn from_world(world: &mut World) -> Self {
|
2022-06-11 09:13:37 +00:00
|
|
|
let mut system_state: SystemState<(
|
|
|
|
Res<RenderDevice>,
|
|
|
|
Res<DefaultImageSampler>,
|
|
|
|
Res<RenderQueue>,
|
|
|
|
)> = SystemState::new(world);
|
2022-10-26 20:13:59 +00:00
|
|
|
let (render_device, default_sampler, render_queue) = system_state.get_mut(world);
|
2022-04-07 16:16:35 +00:00
|
|
|
let clustered_forward_buffer_binding_type = render_device
|
|
|
|
.get_supported_read_only_binding_type(CLUSTERED_FORWARD_STORAGE_BUFFER_COUNT);
|
Migrate to encase from crevice (#4339)
# Objective
- Unify buffer APIs
- Also see #4272
## Solution
- Replace vendored `crevice` with `encase`
---
## Changelog
Changed `StorageBuffer`
Added `DynamicStorageBuffer`
Replaced `UniformVec` with `UniformBuffer`
Replaced `DynamicUniformVec` with `DynamicUniformBuffer`
## Migration Guide
### `StorageBuffer`
removed `set_body()`, `values()`, `values_mut()`, `clear()`, `push()`, `append()`
added `set()`, `get()`, `get_mut()`
### `UniformVec` -> `UniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `len()`, `is_empty()`, `capacity()`, `push()`, `reserve()`, `clear()`, `values()`
added `set()`, `get()`
### `DynamicUniformVec` -> `DynamicUniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `capacity()`, `reserve()`
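As a small sketch of the renamed API on a concrete value (an assumption-laden example, not from the PR; it relies on `Vec4` implementing `ShaderType` and on the `set`/`write_buffer` methods listed above):
```rust
use bevy::math::Vec4;
use bevy::render::render_resource::UniformBuffer;
use bevy::render::renderer::{RenderDevice, RenderQueue};

// `set()` replaces the removed push/values-style accessors, and the buffer is
// still uploaded with `write_buffer` before it can be bound.
fn upload_color(device: &RenderDevice, queue: &RenderQueue) -> UniformBuffer<Vec4> {
    let mut uniform = UniformBuffer::<Vec4>::default();
    uniform.set(Vec4::new(1.0, 0.0, 0.0, 1.0));
    uniform.write_buffer(device, queue); // upload to the GPU
    uniform
}
```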
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-05-18 21:09:21 +00:00
|
|
|
|
2023-01-19 22:11:13 +00:00
|
|
|
/// Returns the appropriate bind group layout vec based on the parameters
|
|
|
|
fn layout_entries(
|
|
|
|
clustered_forward_buffer_binding_type: BufferBindingType,
|
|
|
|
multisampled: bool,
|
|
|
|
) -> Vec<BindGroupLayoutEntry> {
|
|
|
|
let mut entries = vec![
|
2021-11-18 03:45:02 +00:00
|
|
|
// View
|
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 0,
|
|
|
|
visibility: ShaderStages::VERTEX | ShaderStages::FRAGMENT,
|
|
|
|
ty: BindingType::Buffer {
|
|
|
|
ty: BufferBindingType::Uniform,
|
|
|
|
has_dynamic_offset: true,
|
2022-05-18 21:09:21 +00:00
|
|
|
min_binding_size: Some(ViewUniform::min_size()),
|
2021-11-18 03:45:02 +00:00
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
|
|
|
// Lights
|
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 1,
|
|
|
|
visibility: ShaderStages::FRAGMENT,
|
|
|
|
ty: BindingType::Buffer {
|
|
|
|
ty: BufferBindingType::Uniform,
|
|
|
|
has_dynamic_offset: true,
|
2022-05-18 21:09:21 +00:00
|
|
|
min_binding_size: Some(GpuLights::min_size()),
|
2021-11-18 03:45:02 +00:00
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
|
|
|
// Point Shadow Texture Cube Array
|
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 2,
|
|
|
|
visibility: ShaderStages::FRAGMENT,
|
|
|
|
ty: BindingType::Texture {
|
|
|
|
multisampled: false,
|
|
|
|
sample_type: TextureSampleType::Depth,
|
Webgpu support (#8336)
# Objective
- Support WebGPU
- alternative to #5027 that doesn't need any async / await
- fixes #8315
- Surprise fix #7318
## Solution
### For async renderer initialisation
- Update the plugin lifecycle:
- app builds the plugin
- calls `plugin.build`
- registers the plugin
- app starts the event loop
- event loop waits for `ready` of all registered plugins in the same
order
- returns `true` by default
- then call all `finish` then all `cleanup` in the same order as
registered
- then execute the schedule
In the case of the renderer, to avoid anything async:
- building the renderer plugin creates a detached task that will send
back the initialised renderer through a mutex in a resource
- `ready` will wait for the renderer to be present in the resource
- `finish` will take that renderer and place it in the expected
resources by other plugins
- other plugins (that expect the renderer to be available) have their
`finish` called and are able to set up their pipelines
- `cleanup` is called; the only custom implementation left is for pipeline
rendering
### For WebGPU support
- update the `build-wasm-example` script to support passing `--api
webgpu`, which will build the example with WebGPU support
- the feature for webgl2 was always enabled when building for wasm. It's
now in the default feature list and enabled on all platforms, so checks
for this feature must also check that the target_arch is `wasm32`
---
## Migration Guide
- `Plugin::setup` has been renamed `Plugin::cleanup`
- `Plugin::finish` has been added, and plugins adding pipelines should
do it in this function instead of `Plugin::build`
```rust
// Before
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>()
.init_resource::<OtherRenderResource>();
}
}
// After
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<OtherRenderResource>();
}
fn finish(&self, app: &mut App) {
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>();
}
}
```
2023-05-04 22:07:57 +00:00
|
|
|
#[cfg(any(not(feature = "webgl"), not(target_arch = "wasm32")))]
|
2021-11-18 03:45:02 +00:00
|
|
|
view_dimension: TextureViewDimension::CubeArray,
|
Webgpu support (#8336)
# Objective
- Support WebGPU
- alternative to #5027 that doesn't need any async / await
- fixes #8315
- Surprise fix #7318
## Solution
### For async renderer initialisation
- Update the plugin lifecycle:
- app builds the plugin
- calls `plugin.build`
- registers the plugin
- app starts the event loop
- event loop waits for `ready` of all registered plugins in the same
order
- returns `true` by default
- then call all `finish` then all `cleanup` in the same order as
registered
- then execute the schedule
In the case of the renderer, to avoid anything async:
- building the renderer plugin creates a detached task that will send
back the initialised renderer through a mutex in a resource
- `ready` will wait for the renderer to be present in the resource
- `finish` will take that renderer and place it in the expected
resources by other plugins
- other plugins (that expect the renderer to be available) `finish` are
called and they are able to set up their pipelines
- `cleanup` is called, only custom one is still for pipeline rendering
### For WebGPU support
- update the `build-wasm-example` script to support passing `--api
webgpu` that will build the example with WebGPU support
- feature for webgl2 was always enabled when building for wasm. it's now
in the default feature list and enabled on all platforms, so check for
this feature must also check that the target_arch is `wasm32`
---
## Migration Guide
- `Plugin::setup` has been renamed `Plugin::cleanup`
- `Plugin::finish` has been added, and plugins adding pipelines should
do it in this function instead of `Plugin::build`
```rust
// Before
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>()
.init_resource::<OtherRenderResource>();
}
}
// After
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<OtherRenderResource>();
}
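// Resources and pipelines that need the initialised renderer (like
// `RenderResourceNeedingDevice` here) are now created in `finish` instead of `build`.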
fn finish(&self, app: &mut App) {
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>();
}
}
```
2023-05-04 22:07:57 +00:00
        #[cfg(all(feature = "webgl", target_arch = "wasm32"))]
        view_dimension: TextureViewDimension::Cube,
    },
    count: None,
},
// Point Shadow Texture Array Sampler
BindGroupLayoutEntry {
    binding: 3,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Sampler(SamplerBindingType::Comparison),
    count: None,
},
// Directional Shadow Texture Array
BindGroupLayoutEntry {
    binding: 4,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Texture {
        multisampled: false,
        sample_type: TextureSampleType::Depth,
        #[cfg(any(not(feature = "webgl"), not(target_arch = "wasm32")))]
        view_dimension: TextureViewDimension::D2Array,
        #[cfg(all(feature = "webgl", target_arch = "wasm32"))]
        view_dimension: TextureViewDimension::D2,
    },
    count: None,
},
// Directional Shadow Texture Array Sampler
BindGroupLayoutEntry {
    binding: 5,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Sampler(SamplerBindingType::Comparison),
    count: None,
},
Clustered forward rendering (#3153)
# Objective
Implement clustered-forward rendering.
## Solution
~~FIXME - in the interest of keeping the merge train moving, I'm submitting this PR now before the description is ready. I want to add in some comments into the code with references for the various bits and pieces and I want to describe some of the key decisions I made here. I'll do that as soon as I can.~~ Anyone reviewing is welcome to add review comments where you want to know more about how something or other works.
* The summary of the technique is that the view frustum is divided into a grid of sub-volumes called clusters, point lights are tested against each of the clusters to see if they would affect that volume within the scene and, if so, added to a list of lights affecting that cluster. Then when shading a fragment which is a point on the surface of a mesh within the scene, the point is mapped to a cluster and only the lights affecting that cluster are used in lighting calculations. This brings huge performance and scalability benefits as most of the time lights are placed so that there are not that many that overlap each other in terms of their sphere of influence, but there may be many distinct point lights visible in the scene. Doing all the lighting calculations for all visible lights in the scene for every pixel on the screen quickly becomes a performance limitation. Clustered forward rendering allows us to make an approximate list of lights that affect each pixel, indeed each surface in the scene (as it works along the view z axis too, unlike tiled/forward+).
* WebGL2 is a platform we want to support and it does not support storage buffers. Uniform buffer bindings are limited to a maximum of 16384 bytes per binding. I used bit shifting and masking to pack the cluster light lists and various indices into a uniform buffer and the 16kB limit is very likely the first bottleneck in scaling the number of lights in a scene at the moment if the lights can affect many clusters due to their range or proximity to the camera (there are a lot of clusters close to the camera, which is an area for improvement). We could store the information in textures instead of uniform buffers to remove this bottleneck though I don’t know if there are performance implications to reading from textures instead of uniform buffers.
* Because of the uniform buffer binding size limitations we can support a maximum of 256 lights with the current size of the PointLight struct
* The z-slicing method (i.e. the mapping from view-space z to a depth slice which defines the near and far planes of a cluster) uses the Doom 2016 method (sketched after this list). I need to add comments with references to this. It’s an exponential function that simplifies well for the purposes of optimising the fragment shader. The x/y grid divisions are regular in screen space.
* Some optimisation work was done on the allocation of lights to clusters, which involves intersection tests, and for this number of clusters and lights the system has insignificant cost using a fairly naïve algorithm. I think for more lights / finer-grained clusters we could use a BVH, but at some point it would be just much better to use compute shaders and storage buffers.
* Something else to note is that it is absolutely infeasible to use plain cube map point light shadow mapping for many lights. It does not scale in terms of performance nor memory usage. There are some interesting methods I saw discussed in reference material that I will add a link to which render and update shadow maps piece-wise, but they also need compute shaders to work well. Basically for now you need to sacrifice point light shadows for all but a handful of point lights if you don’t want to kill performance. I set the limit to 10 but that’s just what we had from before where 10 was the maximum number of point lights before this PR.
* I added a couple of debug visualisations behind a shader def that were useful for seeing performance impact of light distribution - I should make the debug mode configurable without modifying the shader code. One mode shows the number of lights affecting each cluster by tinting toward red for few lights or green for many lights (maxes out at 16, but not sure that’s a reasonable max). The other shows which cluster the surface at a fragment belongs to by tinting it with a randomish colour. This can help to understand deeper performance issues due to screen space tiles spanning multiple clusters in depth with divergent shader execution times.
Also, there are more things that could be done as improvements, and I will document those somewhere (I'm not sure where will be the best place... in a todo alongside the code, a GitHub issue, somewhere else?) but I think it works well enough and brings significant performance and scalability benefits that it's worth integrating already now and then iterating on.
* Calculate the light’s effective range based on its intensity and physical falloff and either just use this, or take the minimum of the user-supplied range and this. This would avoid unnecessary lighting calculations for clusters that cannot be affected. This would need to take into account HDR tone mapping as in my not-fully-understanding-the-details understanding, the threshold is relative to how bright the scene is.
* Improve the z-slicing to use a larger first slice.
* More gracefully handle the cluster light list uniform buffer binding size limitations by prioritising which lights are included (some heuristic for most significant like closest to the camera, brightest, affecting the most pixels, …)
* Switch to using a texture instead of uniform buffer
* Figure out the / a better story for shadows
I will also probably add an example that demonstrates some of the issues:
* What situations exhaust the space available in the uniform buffers
* Light range too large making lights affect many clusters and so exhausting the space for the lists of lights that affect clusters
* Light range set to be too small producing visible artifacts where clusters the light would physically affect are not affected by the light
* Perhaps some performance issues
* How many lights can be closely packed or affect large portions of the view before performance drops?
2021-12-09 03:08:54 +00:00
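For reference, the exponential view-space z slicing mentioned above can be written as a small helper. This is a sketch of the general Doom-2016-style mapping, not necessarily the exact constants or clamping Bevy uses:
```rust
/// Maps a positive view-space depth `z` in `[near, far]` to a cluster z-slice index in
/// `0..slice_count`, using an exponential distribution so slices near the camera are thinner.
fn view_z_to_slice(z: f32, near: f32, far: f32, slice_count: u32) -> u32 {
    let n = slice_count as f32;
    let log_ratio = (far / near).ln();
    // slice = floor(n * ln(z) / ln(far / near) - n * ln(near) / ln(far / near))
    let slice = (n * z.ln() / log_ratio - n * near.ln() / log_ratio).floor();
    slice.clamp(0.0, n - 1.0) as u32
}

fn main() {
    // Example: 24 depth slices between a near plane of 0.1 and a far plane of 1000.0.
    for z in [0.1_f32, 1.0, 10.0, 100.0, 1000.0] {
        println!("z = {z:6.1} -> slice {}", view_z_to_slice(z, 0.1, 1000.0, 24));
    }
}
```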
// PointLights
BindGroupLayoutEntry {
    binding: 6,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Buffer {
        ty: clustered_forward_buffer_binding_type,
        has_dynamic_offset: false,
Migrate to encase from crevice (#4339)
# Objective
- Unify buffer APIs
- Also see #4272
## Solution
- Replace vendored `crevice` with `encase`
---
## Changelog
Changed `StorageBuffer`
Added `DynamicStorageBuffer`
Replaced `UniformVec` with `UniformBuffer`
Replaced `DynamicUniformVec` with `DynamicUniformBuffer`
## Migration Guide
### `StorageBuffer`
removed `set_body()`, `values()`, `values_mut()`, `clear()`, `push()`, `append()`
added `set()`, `get()`, `get_mut()`
### `UniformVec` -> `UniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `len()`, `is_empty()`, `capacity()`, `push()`, `reserve()`, `clear()`, `values()`
added `set()`, `get()`
### `DynamicUniformVec` -> `DynamicUniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `capacity()`, `reserve()`
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-05-18 21:09:21 +00:00
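To make the renames above concrete, here is an illustrative snippet using only the methods listed in this migration guide; `ViewExtras` and `update_view_extras` are made-up names for the example:
```rust
use bevy::render::render_resource::{ShaderType, UniformBuffer};

/// Made-up uniform data used purely to illustrate the renamed methods.
#[derive(ShaderType, Clone, Copy)]
struct ViewExtras {
    exposure: f32,
}

// Before: a `UniformVec` was filled with `push()` and exposed via `uniform_buffer()`.
// After: a single value is written with `set()` and the GPU buffer is read via `buffer()`.
fn update_view_extras(uniforms: &mut UniformBuffer<ViewExtras>, exposure: f32) {
    uniforms.set(ViewExtras { exposure });
    let _gpu_buffer = uniforms.buffer(); // `None` until the buffer has been written to the GPU
}
```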
        min_binding_size: Some(GpuPointLights::min_size(
            clustered_forward_buffer_binding_type,
        )),
    },
    count: None,
},
// ClusteredLightIndexLists
BindGroupLayoutEntry {
    binding: 7,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Buffer {
        ty: clustered_forward_buffer_binding_type,
        has_dynamic_offset: false,
        min_binding_size: Some(
            ViewClusterBindings::min_size_cluster_light_index_lists(
                clustered_forward_buffer_binding_type,
            ),
        ),
    },
    count: None,
},
// ClusterOffsetsAndCounts
BindGroupLayoutEntry {
    binding: 8,
    visibility: ShaderStages::FRAGMENT,
    ty: BindingType::Buffer {
        ty: clustered_forward_buffer_binding_type,
Clustered forward rendering (#3153)
# Objective
Implement clustered-forward rendering.
## Solution
~~FIXME - in the interest of keeping the merge train moving, I'm submitting this PR now before the description is ready. I want to add in some comments into the code with references for the various bits and pieces and I want to describe some of the key decisions I made here. I'll do that as soon as I can.~~ Anyone reviewing is welcome to add review comments where you want to know more about how something or other works.
* The summary of the technique is that the view frustum is divided into a grid of sub-volumes called clusters, point lights are tested against each of the clusters to see if they would affect that volume within the scene and, if so, added to a list of lights affecting that cluster. Then when shading a fragment which is a point on the surface of a mesh within the scene, the point is mapped to a cluster and only the lights affecting that cluster are used in lighting calculations. This brings huge performance and scalability benefits as most of the time lights are placed so that there are not that many that overlap each other in terms of their sphere of influence, but there may be many distinct point lights visible in the scene. Doing all the lighting calculations for all visible lights in the scene for every pixel on the screen quickly becomes a performance limitation. Clustered forward rendering allows us to make an approximate list of lights that affect each pixel, indeed each surface in the scene (as it works along the view z axis too, unlike tiled/forward+).
* WebGL2 is a platform we want to support and it does not support storage buffers. Uniform buffer bindings are limited to a maximum of 16384 bytes per binding. I used bit shifting and masking to pack the cluster light lists and various indices into a uniform buffer and the 16kB limit is very likely the first bottleneck in scaling the number of lights in a scene at the moment if the lights can affect many clusters due to their range or proximity to the camera (there are a lot of clusters close to the camera, which is an area for improvement). We could store the information in textures instead of uniform buffers to remove this bottleneck though I don’t know if there are performance implications to reading from textures instead of uniform buffers.
* Because of the uniform buffer binding size limitations we can support a maximum of 256 lights with the current size of the PointLight struct
* The z-slicing method (i.e. the mapping from view space z to a depth slice which defines the near and far planes of a cluster) is using the Doom 2016 method. I need to add comments with references to this. It’s an exponential function that simplifies well for the purposes of optimising the fragment shader. xy grid divisions are regular in screen space.
* Some optimisation work was done on the allocation of lights to clusters, which involves intersection tests, and for this number of clusters and lights the system has insignificant cost using a fairly naïve algorithm. I think for more lights / finer-grained clusters we could use a BVH, but at some point it would be just much better to use compute shaders and storage buffers.
* Something else to note is that it is absolutely infeasible to use plain cube map point light shadow mapping for many lights. It does not scale in terms of performance nor memory usage. There are some interesting methods I saw discussed in reference material that I will add a link to which render and update shadow maps piece-wise, but they also need compute shaders to work well. Basically for now you need to sacrifice point light shadows for all but a handful of point lights if you don’t want to kill performance. I set the limit to 10 but that’s just what we had from before where 10 was the maximum number of point lights before this PR.
* I added a couple of debug visualisations behind a shader def that were useful for seeing performance impact of light distribution - I should make the debug mode configurable without modifying the shader code. One mode shows the number of lights affecting each cluster by tinting toward red for few lights or green for many lights (maxes out at 16, but not sure that’s a reasonable max). The other shows which cluster the surface at a fragment belongs to by tinting it with a randomish colour. This can help to understand deeper performance issues due to screen space tiles spanning multiple clusters in depth with divergent shader execution times.
Also, there are more things that could be done as improvements, and I will document those somewhere (I'm not sure where will be the best place... in a todo alongside the code, a GitHub issue, somewhere else?) but I think it works well enough and brings significant performance and scalability benefits that it's worth integrating already now and then iterating on.
* Calculate the light’s effective range based on its intensity and physical falloff and either just use this, or take the minimum of the user-supplied range and this. This would avoid unnecessary lighting calculations for clusters that cannot be affected. This would need to take into account HDR tone mapping as in my not-fully-understanding-the-details understanding, the threshold is relative to how bright the scene is.
* Improve the z-slicing to use a larger first slice.
* More gracefully handle the cluster light list uniform buffer binding size limitations by prioritising which lights are included (some heuristic for most significant like closest to the camera, brightest, affecting the most pixels, …)
* Switch to using a texture instead of uniform buffer
* Figure out the / a better story for shadows
I will also probably add an example that demonstrates some of the issues:
* What situations exhaust the space available in the uniform buffers
* Light range too large making lights affect many clusters and so exhausting the space for the lists of lights that affect clusters
* Light range set to be too small producing visible artifacts where clusters the light would physically affect are not affected by the light
* Perhaps some performance issues
* How many lights can be closely packed or affect large portions of the view before performance drops?
2021-12-09 03:08:54 +00:00
|
|
|
has_dynamic_offset: false,
|
Migrate to encase from crevice (#4339)
# Objective
- Unify buffer APIs
- Also see #4272
## Solution
- Replace vendored `crevice` with `encase`
---
## Changelog
Changed `StorageBuffer`
Added `DynamicStorageBuffer`
Replaced `UniformVec` with `UniformBuffer`
Replaced `DynamicUniformVec` with `DynamicUniformBuffer`
## Migration Guide
### `StorageBuffer`
removed `set_body()`, `values()`, `values_mut()`, `clear()`, `push()`, `append()`
added `set()`, `get()`, `get_mut()`
### `UniformVec` -> `UniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `len()`, `is_empty()`, `capacity()`, `push()`, `reserve()`, `clear()`, `values()`
added `set()`, `get()`
### `DynamicUniformVec` -> `DynamicUniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `capacity()`, `reserve()`
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
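A minimal sketch of the renamed API described in the migration guide above; the `MyUniform` type and the update function are illustrative, not part of this PR:
```rust
use bevy::render::render_resource::{ShaderType, UniformBuffer};

#[derive(ShaderType, Clone, Default)]
struct MyUniform {
    strength: f32,
}

// Where code previously cleared and pushed into a `UniformVec`, the
// encase-backed `UniformBuffer` holds one value that is replaced with `set()`
// and read back with `get()`; `uniform_buffer()` is now `buffer()`.
fn update_uniform(uniform: &mut UniformBuffer<MyUniform>, strength: f32) {
    uniform.set(MyUniform { strength });
    let _gpu_buffer = uniform.buffer(); // `None` until it has been written to the GPU
}
```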
2022-05-18 21:09:21 +00:00
|
|
|
min_binding_size: Some(
|
|
|
|
ViewClusterBindings::min_size_cluster_offsets_and_counts(
|
|
|
|
clustered_forward_buffer_binding_type,
|
|
|
|
),
|
|
|
|
),
|
2021-12-09 03:08:54 +00:00
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
2023-01-19 22:11:13 +00:00
|
|
|
// Globals
|
2022-09-28 04:20:27 +00:00
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 9,
|
|
|
|
visibility: ShaderStages::VERTEX_FRAGMENT,
|
|
|
|
ty: BindingType::Buffer {
|
|
|
|
ty: BufferBindingType::Uniform,
|
|
|
|
has_dynamic_offset: false,
|
|
|
|
min_binding_size: Some(GlobalsUniform::min_size()),
|
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
Add Distance and Atmospheric Fog support (#6412)
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873533-44c029af-13b7-4740-8ea3-af96bd5867c9.png">
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873549-36be7a23-b341-42a2-8a9f-ceea8ac7a2b8.png">
# Objective
- Add support for the “classic” distance fog effect, as well as a more advanced atmospheric fog effect.
## Solution
This PR:
- Introduces a new `FogSettings` component that controls distance fog per-camera.
- Adds support for three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
- Adds support for directional light influence over fog color;
- Extracts fog via `ExtractComponent`, then uses a prepare system that sets up a new dynamic uniform struct (`Fog`), similar to other mesh view types;
- Renders fog in PBR material shader, as a final adjustment to the `output_color`, after PBR is computed (but before tone mapping);
- Adds a new `StandardMaterial` flag to enable fog; (`fog_enabled`)
- Adds convenience methods for easier artistic control when creating non-linear fog types;
- Adds documentation around fog.
---
## Changelog
### Added
- Added support for distance-based fog effects for PBR materials, controllable per-camera via the new `FogSettings` component;
- Added `FogFalloff` enum for selecting between three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
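As a minimal usage sketch of the per-camera fog described above (the colour and distances are placeholder values):
```rust
use bevy::pbr::{FogFalloff, FogSettings};
use bevy::prelude::*;

// Illustrative startup system: enable "classic" linear distance fog on a
// camera by inserting the new per-camera FogSettings component.
fn spawn_foggy_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        FogSettings {
            color: Color::rgb(0.6, 0.65, 0.7),
            falloff: FogFalloff::Linear { start: 5.0, end: 50.0 },
            ..default()
        },
    ));
}
```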
2023-01-29 15:28:56 +00:00
|
|
|
// Fog
|
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 10,
|
|
|
|
visibility: ShaderStages::FRAGMENT,
|
|
|
|
ty: BindingType::Buffer {
|
|
|
|
ty: BufferBindingType::Uniform,
|
|
|
|
has_dynamic_offset: true,
|
|
|
|
min_binding_size: Some(GpuFog::min_size()),
|
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)
# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.
## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (preprocess_depths pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct world position, raytrace in screen
space to estimate occlusion. Rather than doing completely random samples
on a hemisphere, you choose random _slices_ of the hemisphere, and then
can analytically compute the full occlusion of that slice. Also compute
edges based on depth differences here.
* Spatial denoise pass - bilateral blur, using edge detection to not
blur over edges. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to be the minimum of ssao/material occlusion. This then feeds into the
rest of the PBR shader as normal.
---
## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)
## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)
## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)
## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
* https://github.com/gfx-rs/wgpu/pull/3798
* https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because r16float is not a valid
storage texture format
https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats. We can fix
this with a fallback to r32float.
---
## Changelog
- Added ScreenSpaceAmbientOcclusionSettings,
ScreenSpaceAmbientOcclusionQualityLevel, and
ScreenSpaceAmbientOcclusionBundle
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
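A minimal sketch of enabling the feature from the changelog above; the app setup is illustrative, and MSAA is disabled because SSAO samples the non-multisampled depth/normal prepass:
```rust
use bevy::pbr::ScreenSpaceAmbientOcclusionBundle;
use bevy::prelude::*;

// Illustrative: SSAO is enabled per camera by inserting the bundle added in
// this PR onto a 3D camera entity.
fn setup(mut commands: Commands) {
    commands
        .spawn(Camera3dBundle::default())
        .insert(ScreenSpaceAmbientOcclusionBundle::default());
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // SSAO reads the non-multisampled prepass textures, so turn MSAA off.
        .insert_resource(Msaa::Off)
        .add_systems(Startup, setup)
        .run();
}
```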
2023-06-18 21:05:55 +00:00
|
|
|
// Screen space ambient occlusion texture
|
|
|
|
BindGroupLayoutEntry {
|
|
|
|
binding: 11,
|
|
|
|
visibility: ShaderStages::FRAGMENT,
|
|
|
|
ty: BindingType::Texture {
|
|
|
|
multisampled: false,
|
|
|
|
sample_type: TextureSampleType::Float { filterable: false },
|
|
|
|
view_dimension: TextureViewDimension::D2,
|
|
|
|
},
|
|
|
|
count: None,
|
|
|
|
},
|
2023-01-19 22:11:13 +00:00
|
|
|
];
|
EnvironmentMapLight, BRDF Improvements (#7051)
(Before)
![image](https://user-images.githubusercontent.com/47158642/213946111-15ec758f-1f1d-443c-b196-1fdcd4ae49da.png)
(After)
![image](https://user-images.githubusercontent.com/47158642/217051179-67381e73-dd44-461b-a2c7-87b0440ef8de.png)
![image](https://user-images.githubusercontent.com/47158642/212492404-524e4ad3-7837-4ed4-8b20-2abc276aa8e8.png)
# Objective
- Improve lighting; especially reflections.
- Closes https://github.com/bevyengine/bevy/issues/4581.
## Solution
- Implement environment maps, providing better ambient light.
- Add microfacet multibounce approximation for specular highlights from Filament.
- Occlusion is no longer incorrectly applied to direct lighting. It now only applies to diffuse indirect light. Unsure if it's also supposed to apply to specular indirect light - the glTF specification just says "indirect light". In the case of ambient occlusion, for instance, that's usually only calculated as diffuse though. For now, I'm choosing to apply this just to indirect diffuse light, and not specular.
- Modified the PBR example to use an environment map, and have labels.
- Added `FallbackImageCubemap`.
## Implementation
- IBL technique references can be found in environment_map.wgsl.
- It's more accurate to use a LUT for the scale/bias. Filament has a good reference on generating this LUT. For now, I just used an analytic approximation.
- For now, environment maps must first be prefiltered outside of bevy using a 3rd party tool. See the `EnvironmentMap` documentation.
- Eventually, we should have our own prefiltering code, so that we can have dynamically changing environment maps, as well as let users drop in an HDR image and use asset preprocessing to create the needed textures using only bevy.
---
## Changelog
- Added an `EnvironmentMapLight` camera component that adds additional ambient light to a scene.
- StandardMaterials will now appear brighter and more saturated at high roughness, due to internal material changes. This is more physically correct.
- Fixed StandardMaterial occlusion being incorrectly applied to direct lighting.
- Added `FallbackImageCubemap`.
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
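A minimal sketch of attaching the new camera component; the asset paths are placeholders for prefiltered diffuse/specular cubemaps as described above:
```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

// Illustrative: add image-based ambient light to a camera. The two handles
// must point at prefiltered cubemaps (see the EnvironmentMapLight docs).
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
        },
    ));
}
```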
2023-02-09 16:46:32 +00:00
|
|
|
|
|
|
|
// EnvironmentMapLight
|
|
|
|
let environment_map_entries =
|
2023-06-18 21:05:55 +00:00
|
|
|
environment_map::get_bind_group_layout_entries([12, 13, 14]);
|
2023-02-09 16:46:32 +00:00
|
|
|
entries.extend_from_slice(&environment_map_entries);
|
|
|
|
|
2023-02-19 20:38:13 +00:00
|
|
|
// Tonemapping
|
2023-06-18 21:05:55 +00:00
|
|
|
let tonemapping_lut_entries = get_lut_bind_group_layout_entries([15, 16]);
|
2023-02-19 20:38:13 +00:00
|
|
|
entries.extend_from_slice(&tonemapping_lut_entries);
|
|
|
|
|
Webgpu support (#8336)
# Objective
- Support WebGPU
- alternative to #5027 that doesn't need any async / await
- fixes #8315
- Surprise fix #7318
## Solution
### For async renderer initialisation
- Update the plugin lifecycle:
- app builds the plugin
- calls `plugin.build`
- registers the plugin
- app starts the event loop
- event loop waits for `ready` of all registered plugins in the same
order
- returns `true` by default
- then call all `finish` then all `cleanup` in the same order as
registered
- then execute the schedule
In the case of the renderer, to avoid anything async:
- building the renderer plugin creates a detached task that will send
back the initialised renderer through a mutex in a resource
- `ready` will wait for the renderer to be present in the resource
- `finish` will take that renderer and place it in the expected
resources by other plugins
- other plugins (that expect the renderer to be available) `finish` are
called and they are able to set up their pipelines
- `cleanup` is called; the only custom implementation is still the one for pipeline rendering
### For WebGPU support
- update the `build-wasm-example` script to support passing `--api
webgpu` that will build the example with WebGPU support
- the webgl2 feature was always enabled when building for wasm. It's now
in the default feature list and enabled on all platforms, so checks for
this feature must also check that the target_arch is `wasm32`
---
## Migration Guide
- `Plugin::setup` has been renamed `Plugin::cleanup`
- `Plugin::finish` has been added, and plugins adding pipelines should
do it in this function instead of `Plugin::build`
```rust
// Before
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>()
.init_resource::<OtherRenderResource>();
}
}
// After
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<OtherRenderResource>();
}
fn finish(&self, app: &mut App) {
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>();
}
}
```
2023-05-04 22:07:57 +00:00
|
|
|
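// Only add the prepass texture bind group entries when either we are not targeting WebGL2, or the view is not multisampled.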
if cfg!(any(not(feature = "webgl"), not(target_arch = "wasm32")))
|
|
|
|
|| (cfg!(all(feature = "webgl", target_arch = "wasm32")) && !multisampled)
|
|
|
|
{
|
2023-03-02 02:23:06 +00:00
|
|
|
entries.extend_from_slice(&prepass::get_bind_group_layout_entries(
|
2023-06-18 21:05:55 +00:00
|
|
|
[17, 18, 19],
|
2023-03-02 02:23:06 +00:00
|
|
|
multisampled,
|
|
|
|
));
|
2023-01-19 22:11:13 +00:00
|
|
|
}
|
2023-02-09 16:46:32 +00:00
|
|
|
|
2023-01-19 22:11:13 +00:00
|
|
|
entries
|
|
|
|
}
|
|
|
|
|
|
|
|
let view_layout = render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to bind group / do draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
label: Some("mesh_view_layout"),
|
2023-01-19 22:11:13 +00:00
|
|
|
entries: &layout_entries(clustered_forward_buffer_binding_type, false),
|
2021-11-18 03:45:02 +00:00
|
|
|
});
|
|
|
|
|
2023-01-19 22:11:13 +00:00
|
|
|
let view_layout_multisampled =
|
|
|
|
render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
|
|
|
|
label: Some("mesh_view_layout_multisampled"),
|
|
|
|
entries: &layout_entries(clustered_forward_buffer_binding_type, true),
|
|
|
|
});
|
|
|
|
|
2021-11-18 03:45:02 +00:00
|
|
|
// A 1x1x1 'all 1.0' texture to use as a dummy in place of optional StandardMaterial textures
|
|
|
|
let dummy_white_gpu_image = {
|
2023-03-04 12:29:10 +00:00
|
|
|
let image = Image::default();
|
2021-11-18 03:45:02 +00:00
|
|
|
let texture = render_device.create_texture(&image.texture_descriptor);
|
2022-06-11 09:13:37 +00:00
|
|
|
let sampler = match image.sampler_descriptor {
|
|
|
|
ImageSampler::Default => (**default_sampler).clone(),
|
|
|
|
ImageSampler::Descriptor(descriptor) => render_device.create_sampler(&descriptor),
|
|
|
|
};
|
2021-11-18 03:45:02 +00:00
let format_size = image.texture_descriptor.format.pixel_size();

render_queue.write_texture(
    ImageCopyTexture {
        texture: &texture,
        mip_level: 0,
        origin: Origin3d::ZERO,
        aspect: TextureAspect::All,
    },
    &image.data,
    ImageDataLayout {
        offset: 0,
        bytes_per_row: Some(image.texture_descriptor.size.width * format_size as u32),
        rows_per_image: None,
    },
    image.texture_descriptor.size,
);

let texture_view = texture.create_view(&TextureViewDescriptor::default());

GpuImage {
    texture,
    texture_view,
    texture_format: image.texture_descriptor.format,
    sampler,
    size: Vec2::new(
Add 2d meshes and materials (#3460)
# Objective
The current 2d rendering is specialized to render sprites; we need a generic way to render 2d items, using meshes and materials like we have for 3d.
## Solution
I cloned a good part of `bevy_pbr` into `bevy_sprite/src/mesh2d`, removed lighting and pbr itself, adapted it to 2d rendering, added a `ColorMaterial`, and modified the sprite rendering to break batches around 2d meshes.
~~The PR is a bit crude; I tried to change as little as I could in both the parts copied from 3d and the current sprite rendering to make reviewing easier. In the future, I expect we could make the sprite rendering a normal 2d material, cleanly integrated with the rest.~~ _edit: see <https://github.com/bevyengine/bevy/pull/3460#issuecomment-1003605194>_
## Remaining work
- ~~don't require mesh normals~~ _out of scope_
- ~~add an example~~ _done_
- support 2d meshes & materials in the UI?
- bikeshed names (I didn't think hard about naming, please check if it's fine)
## Remaining questions
- ~~should we add a depth buffer to 2d now that there are 2d meshes?~~ _let's revisit that when we have an opaque render phase_
- ~~should we add MSAA support to the sprites, or remove it from the 2d meshes?~~ _I added MSAA to sprites since it's really needed for 2d meshes_
- ~~how to customize vertex attributes?~~ _#3120_
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-01-08 01:29:08 +00:00
        image.texture_descriptor.size.width as f32,
        image.texture_descriptor.size.height as f32,
    ),
    mip_level_count: image.texture_descriptor.mip_level_count,
}
};
MeshPipeline {
    view_layout,
    view_layout_multisampled,
    clustered_forward_buffer_binding_type,
    dummy_white_gpu_image,
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and babylon.js, so it is only natural to implement them
in Bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The user-accessible `MorphWeights` component controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh may
have different weights); all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh of individually up to 116508 vertices,
animation currently strictly limited to the position, normal and tangent
attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`, this allows defining morph targets (a fairly complex and
nested data structure) through iterators (ie: single copy instead of
passing around buffers), see documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` control mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each indices of loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- handle morph targets animations in `bevy_animation` (previously, it
was a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also handle properly the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
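To make the weighting scheme concrete, here is a small CPU-side sketch (illustrative only; in Bevy the blend happens in the mesh shader against the morph texture):
```rust
// Each morph target stores per-vertex offsets; the final vertex position is
// the base position plus the weighted sum of every target's offset.
fn blend_vertex_position(base: [f32; 3], offsets: &[[f32; 3]], weights: &[f32]) -> [f32; 3] {
    let mut position = base;
    for (offset, weight) in offsets.iter().zip(weights) {
        position[0] += weight * offset[0];
        position[1] += weight * offset[1];
        position[2] += weight * offset[2];
    }
    position
}
```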
    mesh_layouts: MeshLayouts::new(&render_device),
    per_object_buffer_batch_size: GpuArrayBuffer::<MeshUniform>::batch_size(&render_device),
}
}
}

impl MeshPipeline {
    pub fn get_image_texture<'a>(
        &'a self,
        gpu_images: &'a RenderAssets<Image>,
        handle_option: &Option<Handle<Image>>,
    ) -> Option<(&'a TextureView, &'a Sampler)> {
        if let Some(handle) = handle_option {
            let gpu_image = gpu_images.get(handle)?;
            Some((&gpu_image.texture_view, &gpu_image.sampler))
        } else {
            Some((
                &self.dummy_white_gpu_image.texture_view,
                &self.dummy_white_gpu_image.sampler,
            ))
        }
    }
}

bitflags::bitflags! {
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
    #[repr(transparent)]
    // NOTE: Apparently quadro drivers support up to 64x MSAA.
    /// MSAA uses the highest 3 bits for the MSAA log2(sample count) to support up to 128x MSAA.
    pub struct MeshPipelineKey: u32 {
        const NONE = 0;
        const HDR = (1 << 0);
        const TONEMAP_IN_SHADER = (1 << 1);
        const DEBAND_DITHER = (1 << 2);
        const DEPTH_PREPASS = (1 << 3);
        const NORMAL_PREPASS = (1 << 4);
        const MOTION_VECTOR_PREPASS = (1 << 5);
Add `MAY_DISCARD` shader def, enabling early depth tests for most cases (#6697)
# Objective
- Right now we can't really benefit from [early depth
testing](https://www.khronos.org/opengl/wiki/Early_Fragment_Test) in our
PBR shader because it includes codepaths with `discard`, even for
situations where they are not necessary.
## Solution
- This PR introduces a new `MeshPipelineKey` and shader def,
`MAY_DISCARD`;
- All possible material/mesh options that that may result in `discard`s
being needed must set `MAY_DISCARD` ahead of time:
- Right now, this is only `AlphaMode::Mask(f32)`, but in the future
might include other options/effects; (e.g. one effect I'm personally
interested in is bayer dither pseudo-transparency for LOD transitions of
opaque meshes)
- Shader codepaths that can `discard` are guarded by an `#ifdef
MAY_DISCARD` preprocessor directive:
- Right now, this is just one branch in `alpha_discard()`;
- If `MAY_DISCARD` is _not_ set, the `@early_depth_test` attribute is
added to the PBR fragment shader. This is a not yet documented, possibly
non-standard WGSL extension I found browsing Naga's source code. [I
opened a PR to document it
there](https://github.com/gfx-rs/naga/pull/2132). My understanding is
that for backends where this attribute is supported, it will force an
explicit opt-in to early depth test. (e.g. via
`layout(early_fragment_tests) in;` in GLSL)
## Caveats
- I included `@early_depth_test` for the sake of us being explicit, and
avoiding the need for the driver to be “smart” about enabling this
feature. That way, if we make a mistake and include a `discard`
unguarded by `MAY_DISCARD`, it will either produce errors or noticeable
visual artifacts so that we'll catch it early, instead of causing a
performance regression.
- I'm not sure explicit early depth test is supported on the naga Metal
backend, which is what I'm currently using, so I can't really test the
explicit early depth test enable, I would like others with Vulkan/GL
hardware to test it if possible;
- I would like some guidance on how to measure/verify the performance
benefits of this;
- If I understand it correctly, this, or _something like this_ is needed
to fully reap the performance gains enabled by #6284;
- This will _most definitely_ conflict with #6284 and #6644. I can fix
the conflicts as needed, depending on whether/the order they end up
being merging in.
---
## Changelog
### Changed
- Early depth tests are now enabled whenever possible for meshes using
`StandardMaterial`, reducing the number of fragments evaluated for
scenes with lots of occlusions.
2023-05-29 15:15:01 +00:00
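A minimal sketch of that rule, using local stand-in types rather than Bevy's actual ones: only alpha-masked materials opt into the flag.
```rust
// Stand-in alpha mode; only the Mask variant uses `discard` today.
enum AlphaMode {
    Opaque,
    Mask(f32),
    Blend,
}

// True when the material's fragment shader may call `discard`, i.e. when the
// MAY_DISCARD shader def / pipeline key bit must be set ahead of time.
fn needs_may_discard(alpha_mode: &AlphaMode) -> bool {
    matches!(alpha_mode, AlphaMode::Mask(_))
}
```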
        const MAY_DISCARD = (1 << 6); // Guards shader codepaths that may discard, allowing early depth tests in most cases
        // See: https://www.khronos.org/opengl/wiki/Early_Fragment_Test
        const ENVIRONMENT_MAP = (1 << 7);
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)
# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.
## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (preprocess_depths pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct world position, raytrace in screen
space to estimate occlusion. Rather than doing completely random samples
on a hemisphere, you choose random _slices_ of the hemisphere, and then
can analytically compute the full occlusion of that slice. Also compute
edges based on depth differences here.
* Spatial denoise pass - bilateral blur, using edge detection to not
blur over edges. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to be the minimum of ssao/material occlusion. This then feeds into the
rest of the PBR shader as normal.
---
## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)
## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)
## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)
## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
* https://github.com/gfx-rs/wgpu/pull/3798
* https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because r16float is not a valid
storage texture format
https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats. We can fix
this with a fallback to r32float.
---
## Changelog
- Added ScreenSpaceAmbientOcclusionSettings,
ScreenSpaceAmbientOcclusionQualityLevel, and
ScreenSpaceAmbientOcclusionBundle
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-18 21:05:55 +00:00
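As a tiny sketch of the final combination step described above (names are illustrative):
```rust
// The main pass takes the minimum of the SSAO sample and the material's own
// occlusion value; the combined term then feeds the rest of the PBR shading.
fn combined_occlusion(material_occlusion: f32, ssao_sample: f32) -> f32 {
    material_occlusion.min(ssao_sample)
}
```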
        const SCREEN_SPACE_AMBIENT_OCCLUSION = (1 << 8);
        const DEPTH_CLAMP_ORTHO = (1 << 9);
        const TAA = (1 << 10);
        const MORPH_TARGETS = (1 << 11);
        const BLEND_RESERVED_BITS = Self::BLEND_MASK_BITS << Self::BLEND_SHIFT_BITS; // ← Bitmask reserving bits for the blend state
        const BLEND_OPAQUE = (0 << Self::BLEND_SHIFT_BITS); // ← Values are just sequential within the mask, and can range from 0 to 3
        const BLEND_PREMULTIPLIED_ALPHA = (1 << Self::BLEND_SHIFT_BITS); //
        const BLEND_MULTIPLY = (2 << Self::BLEND_SHIFT_BITS); // ← We still have room for one more value without adding more bits
        const BLEND_ALPHA = (3 << Self::BLEND_SHIFT_BITS);
        const MSAA_RESERVED_BITS = Self::MSAA_MASK_BITS << Self::MSAA_SHIFT_BITS;
        const PRIMITIVE_TOPOLOGY_RESERVED_BITS = Self::PRIMITIVE_TOPOLOGY_MASK_BITS << Self::PRIMITIVE_TOPOLOGY_SHIFT_BITS;
        const TONEMAP_METHOD_RESERVED_BITS = Self::TONEMAP_METHOD_MASK_BITS << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_NONE = 0 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_REINHARD = 1 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_REINHARD_LUMINANCE = 2 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_ACES_FITTED = 3 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_AGX = 4 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_SOMEWHAT_BORING_DISPLAY_TRANSFORM = 5 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_TONY_MC_MAPFACE = 6 << Self::TONEMAP_METHOD_SHIFT_BITS;
        const TONEMAP_METHOD_BLENDER_FILMIC = 7 << Self::TONEMAP_METHOD_SHIFT_BITS;
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to set bind groups or issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
impl MeshPipelineKey {
|
2022-08-30 03:00:39 +00:00
|
|
|
const MSAA_MASK_BITS: u32 = 0b111;
|
|
|
|
const MSAA_SHIFT_BITS: u32 = 32 - Self::MSAA_MASK_BITS.count_ones();
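// Multi-bit fields are packed downward from the top of the 32-bit key:
// MSAA takes the highest 3 bits (shift = 32 - 3 = 29), and each field that
// follows starts right below where the previous one ends.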
|
2021-12-18 20:55:40 +00:00
|
|
|
const PRIMITIVE_TOPOLOGY_MASK_BITS: u32 = 0b111;
|
2023-01-21 21:46:53 +00:00
|
|
|
const PRIMITIVE_TOPOLOGY_SHIFT_BITS: u32 =
|
|
|
|
Self::MSAA_SHIFT_BITS - Self::PRIMITIVE_TOPOLOGY_MASK_BITS.count_ones();
|
|
|
|
const BLEND_MASK_BITS: u32 = 0b11;
|
|
|
|
const BLEND_SHIFT_BITS: u32 =
|
|
|
|
Self::PRIMITIVE_TOPOLOGY_SHIFT_BITS - Self::BLEND_MASK_BITS.count_ones();
|
2023-02-19 20:38:13 +00:00
|
|
|
const TONEMAP_METHOD_MASK_BITS: u32 = 0b111;
|
|
|
|
const TONEMAP_METHOD_SHIFT_BITS: u32 =
|
|
|
|
Self::BLEND_SHIFT_BITS - Self::TONEMAP_METHOD_MASK_BITS.count_ones();
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
|
|
|
|
pub fn from_msaa_samples(msaa_samples: u32) -> Self {
|
2022-08-30 03:00:39 +00:00
|
|
|
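// Sample counts are powers of two, so `trailing_zeros()` maps 1/2/4/8 to
// 0/1/2/3 (log2 of the count), which fits inside the 3-bit MSAA mask.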
let msaa_bits =
|
|
|
|
(msaa_samples.trailing_zeros() & Self::MSAA_MASK_BITS) << Self::MSAA_SHIFT_BITS;
|
2023-06-01 08:41:42 +00:00
|
|
|
Self::from_bits_retain(msaa_bits)
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2022-10-26 20:13:59 +00:00
|
|
|
pub fn from_hdr(hdr: bool) -> Self {
|
|
|
|
if hdr {
|
|
|
|
MeshPipelineKey::HDR
|
|
|
|
} else {
|
|
|
|
MeshPipelineKey::NONE
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
pub fn msaa_samples(&self) -> u32 {
|
2023-06-01 08:41:42 +00:00
|
|
|
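// Inverse of `from_msaa_samples`: extract the stored log2 value and shift
// 1 back up by it to recover the sample count.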
1 << ((self.bits() >> Self::MSAA_SHIFT_BITS) & Self::MSAA_MASK_BITS)
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
2021-12-18 20:55:40 +00:00
|
|
|
|
|
|
|
pub fn from_primitive_topology(primitive_topology: PrimitiveTopology) -> Self {
|
|
|
|
let primitive_topology_bits = ((primitive_topology as u32)
|
|
|
|
& Self::PRIMITIVE_TOPOLOGY_MASK_BITS)
|
|
|
|
<< Self::PRIMITIVE_TOPOLOGY_SHIFT_BITS;
|
2023-06-01 08:41:42 +00:00
|
|
|
Self::from_bits_retain(primitive_topology_bits)
|
2021-12-18 20:55:40 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
pub fn primitive_topology(&self) -> PrimitiveTopology {
|
2023-06-01 08:41:42 +00:00
|
|
|
let primitive_topology_bits = (self.bits() >> Self::PRIMITIVE_TOPOLOGY_SHIFT_BITS)
|
|
|
|
& Self::PRIMITIVE_TOPOLOGY_MASK_BITS;
|
2021-12-18 20:55:40 +00:00
|
|
|
match primitive_topology_bits {
|
|
|
|
x if x == PrimitiveTopology::PointList as u32 => PrimitiveTopology::PointList,
|
|
|
|
x if x == PrimitiveTopology::LineList as u32 => PrimitiveTopology::LineList,
|
|
|
|
x if x == PrimitiveTopology::LineStrip as u32 => PrimitiveTopology::LineStrip,
|
|
|
|
x if x == PrimitiveTopology::TriangleList as u32 => PrimitiveTopology::TriangleList,
|
|
|
|
x if x == PrimitiveTopology::TriangleStrip as u32 => PrimitiveTopology::TriangleStrip,
|
|
|
|
_ => PrimitiveTopology::default(),
|
|
|
|
}
|
|
|
|
}
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine-grained
control. This is typically used for facial expressions. By specifying
multiple poses as vertex offsets, and providing a set of weights for each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allows composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of WebGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh may
have different weights); all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and tangent
attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`; this allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers); see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` control mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- handle morph target animations in `bevy_animation` (previously, this
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs` (a usage sketch follows below)
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
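For a third-party `SpecializedMeshPipeline` implementation, the migration
boils down to something like the sketch below. This is only an illustrative
fragment: the `self.mesh_pipeline` field and the shader-location offset `5`
are assumptions for the sketch, not part of this PR.
```rust
// Hypothetical excerpt from a custom SpecializedMeshPipeline::specialize;
// `self.mesh_pipeline: MeshPipeline` and the offset `5` are assumed here.
let mut shader_defs = Vec::new();
let mut vertex_attributes = Vec::new();
// ... push the defs/attributes your material needs ...
let mesh_bind_group_layout = setup_morph_and_skinning_defs(
    &self.mesh_pipeline.mesh_layouts,
    layout,
    5, // first free shader location for the joint index/weight attributes
    &key,
    &mut shader_defs,
    &mut vertex_attributes,
);
// Use this layout wherever the old `mesh_layout`/`skinned_mesh_layout`
// fields were used, e.g. when assembling the pipeline's bind group layouts.
```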
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
fn is_skinned(layout: &Hashed<InnerMeshVertexBufferLayout>) -> bool {
|
|
|
|
layout.contains(Mesh::ATTRIBUTE_JOINT_INDEX) && layout.contains(Mesh::ATTRIBUTE_JOINT_WEIGHT)
|
|
|
|
}
|
|
|
|
pub fn setup_morph_and_skinning_defs(
|
|
|
|
mesh_layouts: &MeshLayouts,
|
|
|
|
layout: &Hashed<InnerMeshVertexBufferLayout>,
|
|
|
|
offset: u32,
|
|
|
|
key: &MeshPipelineKey,
|
|
|
|
shader_defs: &mut Vec<ShaderDefVal>,
|
|
|
|
vertex_attributes: &mut Vec<VertexAttributeDescriptor>,
|
|
|
|
) -> BindGroupLayout {
|
|
|
|
let mut add_skin_data = || {
|
|
|
|
shader_defs.push("SKINNED".into());
|
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_JOINT_INDEX.at_shader_location(offset));
|
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_JOINT_WEIGHT.at_shader_location(offset + 1));
|
|
|
|
};
|
|
|
|
let is_morphed = key.intersects(MeshPipelineKey::MORPH_TARGETS);
|
|
|
|
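// Pick the one bind group layout variant that matches exactly the
// per-mesh data (skinning and/or morph targets) this mesh provides.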
match (is_skinned(layout), is_morphed) {
|
|
|
|
(true, false) => {
|
|
|
|
add_skin_data();
|
|
|
|
mesh_layouts.skinned.clone()
|
|
|
|
}
|
|
|
|
(true, true) => {
|
|
|
|
add_skin_data();
|
|
|
|
shader_defs.push("MORPH_TARGETS".into());
|
|
|
|
mesh_layouts.morphed_skinned.clone()
|
|
|
|
}
|
|
|
|
(false, true) => {
|
|
|
|
shader_defs.push("MORPH_TARGETS".into());
|
|
|
|
mesh_layouts.morphed.clone()
|
|
|
|
}
|
|
|
|
(false, false) => mesh_layouts.model_only.clone(),
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Mesh vertex buffer layouts (#3959)
This PR makes a number of changes to how meshes and vertex attributes are handled, with the goal of enabling easy and flexible custom vertex attributes:
* Reworks the `Mesh` type to use the newly added `VertexAttribute` internally
* `VertexAttribute` defines the name, a unique `VertexAttributeId`, and a `VertexFormat`
* `VertexAttributeId` is used to produce consistent sort orders for vertex buffer generation, replacing the more expensive and often surprising "name based sorting"
* Meshes can be used to generate a `MeshVertexBufferLayout`, which defines the layout of the gpu buffer produced by the mesh. `MeshVertexBufferLayouts` can then be used to generate actual `VertexBufferLayouts` according to the requirements of a specific pipeline. This decoupling of "mesh layout" vs "pipeline vertex buffer layout" is what enables custom attributes. We don't need to standardize _mesh layouts_ or contort meshes to meet the needs of a specific pipeline. As long as the mesh has what the pipeline needs, it will work transparently.
* Mesh-based pipelines now specialize on `&MeshVertexBufferLayout` via the new `SpecializedMeshPipeline` trait (which behaves like `SpecializedPipeline`, but adds `&MeshVertexBufferLayout`). The integrity of the pipeline cache is maintained because the `MeshVertexBufferLayout` is treated as part of the key (which is fully abstracted from implementers of the trait ... no need to add any additional info to the specialization key).
* Hashing `MeshVertexBufferLayout` is too expensive to do for every entity, every frame. To make this scalable, I added a generalized "pre-hashing" solution to `bevy_utils`: `Hashed<T>` keys and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). Why didn't I just do the quick and dirty in-place "pre-compute hash and use that u64 as a key in a hashmap" that we've done in the past? Because it's wrong! Hashes by themselves aren't enough because two different values can produce the same hash. Re-hashing a hash is even worse! I decided to build a generalized solution because this pattern has come up in the past and we've chosen to do the wrong thing. Now we can do the right thing! This did unfortunately require pulling in `hashbrown` and using that in `bevy_utils`, because avoiding re-hashes requires the `raw_entry_mut` api, which isn't stabilized yet (and may never be ... `entry_ref` is favored now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. Note that adding `hashbrown` doesn't increase our dependency count because it was already in our tree. I will probably break these changes out into their own PR. (A conceptual sketch of this pre-hashing pattern follows after this message.)
* Specializing on `MeshVertexBufferLayout` has one non-obvious behavior: it can produce identical pipelines for two different MeshVertexBufferLayouts. To optimize the number of active pipelines / reduce re-binds while drawing, I de-duplicate pipelines post-specialization using the final `VertexBufferLayout` as the key. For example, consider a pipeline that needs the layout `(position, normal)` and is specialized using two meshes: `(position, normal, uv)` and `(position, normal, other_vec2)`. If both of these meshes result in `(position, normal)` specializations, we can use the same pipeline! Now we do. Cool!
To briefly illustrate, this is what the relevant section of `MeshPipeline`'s specialization code looks like now:
```rust
impl SpecializedMeshPipeline for MeshPipeline {
type Key = MeshPipelineKey;
fn specialize(
&self,
key: Self::Key,
layout: &MeshVertexBufferLayout,
) -> RenderPipelineDescriptor {
let mut vertex_attributes = vec![
Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
];
let mut shader_defs = Vec::new();
if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
shader_defs.push(String::from("VERTEX_TANGENTS"));
vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
}
let vertex_buffer_layout = layout
.get_layout(&vertex_attributes)
.expect("Mesh is missing a vertex attribute");
```
Notice that this is _much_ simpler than it was before. And now any mesh with any layout can be used with this pipeline, provided it has vertex positions, normals, and uvs. We even got to remove `HAS_TANGENTS` from MeshPipelineKey and `has_tangents` from `GpuMesh`, because that information is redundant with `MeshVertexBufferLayout`.
This is still a draft because I still need to:
* Add more docs
* Experiment with adding error handling to mesh pipeline specialization (which would print errors at runtime when a mesh is missing a vertex attribute required by a pipeline). If it doesn't tank perf, we'll keep it.
* Consider breaking out the PreHash / hashbrown changes into a separate PR.
* Add an example illustrating this change
* Verify that the "mesh-specialized pipeline de-duplication code" works properly
Please don't yell at me for not doing these things yet :) Just trying to get this in people's hands asap.
Alternative to #3120
Fixes #3030
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-02-23 23:21:13 +00:00
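The pre-hashing idea from #3959 above can be illustrated with a small,
self-contained sketch. Note that `PreHashed` here is a conceptual stand-in
invented for this example, not the actual `bevy_utils::Hashed<T>` /
`PreHashMap<K, V>` API:
```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// A value stored together with a hash computed once, up front.
#[derive(Debug, PartialEq, Eq)]
struct PreHashed<T> {
    hash: u64,
    value: T,
}

impl<T: Hash> PreHashed<T> {
    fn new(value: T) -> Self {
        // Deterministic hasher: equal values always get equal hashes.
        let mut hasher = DefaultHasher::new();
        value.hash(&mut hasher);
        Self { hash: hasher.finish(), value }
    }
}

/// Hashing only feeds the precomputed hash to the hasher, so the wrapped
/// value itself is hashed exactly once, at construction time.
impl<T> Hash for PreHashed<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        state.write_u64(self.hash);
    }
}

fn main() {
    let mut pipelines: HashMap<PreHashed<Vec<u32>>, &str> = HashMap::new();
    pipelines.insert(PreHashed::new(vec![0, 1, 2]), "specialized pipeline A");
    // On lookup the map hashes only the stored u64, not the Vec contents.
    assert_eq!(
        pipelines.get(&PreHashed::new(vec![0, 1, 2])),
        Some(&"specialized pipeline A")
    );
}
```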
|
|
|
impl SpecializedMeshPipeline for MeshPipeline {
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
type Key = MeshPipelineKey;
|
|
|
|
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
fn specialize(
|
|
|
|
&self,
|
|
|
|
key: Self::Key,
|
|
|
|
layout: &MeshVertexBufferLayout,
|
|
|
|
) -> Result<RenderPipelineDescriptor, SpecializedMeshPipelineError> {
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
let mut shader_defs = Vec::new();
|
2022-10-10 17:58:15 +00:00
|
|
|
let mut vertex_attributes = Vec::new();
|
|
|
|
|
Use GpuArrayBuffer for MeshUniform (#9254)
# Objective
- Reduce the number of rebindings to enable batching of draw commands
## Solution
- Use the new `GpuArrayBuffer` for `MeshUniform` data to store all
`MeshUniform` data in arrays within fewer bindings
- Sort opaque/alpha mask prepass, opaque/alpha mask main, and shadow
phases also by the batch per-object data binding dynamic offset to
improve performance on WebGL2.
---
## Changelog
- Changed: Per-object `MeshUniform` data is now managed by
`GpuArrayBuffer` as arrays in buffers that need to be indexed into.
## Migration Guide
Accessing the `model` member of an individual mesh object's shader
`Mesh` struct the old way where each `MeshUniform` was stored at its own
dynamic offset:
```rust
struct Vertex {
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh.model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
The new way where one needs to index into the array of `Mesh`es for the
batch:
```rust
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh[vertex.instance_index].model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
Note that using the instance_index is the default way to pass the
per-object index into the shader, but if you wish to do custom rendering
approaches you can pass it in however you like.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
2023-07-30 13:17:08 +00:00
|
|
|
shader_defs.push("VERTEX_OUTPUT_INSTANCE_INDEX".into());
|
|
|
|
|
2022-10-10 17:58:15 +00:00
|
|
|
if layout.contains(Mesh::ATTRIBUTE_POSITION) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("VERTEX_POSITIONS".into());
|
2022-10-10 17:58:15 +00:00
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_POSITION.at_shader_location(0));
|
|
|
|
}
|
|
|
|
|
|
|
|
if layout.contains(Mesh::ATTRIBUTE_NORMAL) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("VERTEX_NORMALS".into());
|
2022-10-10 17:58:15 +00:00
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_NORMAL.at_shader_location(1));
|
|
|
|
}
|
|
|
|
|
2022-07-08 20:55:08 +00:00
|
|
|
if layout.contains(Mesh::ATTRIBUTE_UV_0) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("VERTEX_UVS".into());
|
2022-07-08 20:55:08 +00:00
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_UV_0.at_shader_location(2));
|
|
|
|
}
|
|
|
|
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("VERTEX_TANGENTS".into());
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to set bind groups / issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
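As a rough sketch of what "fallible" buys us here (the trait and types below are simplified stand-ins, not Bevy's actual render command API): a command can bail out before touching bind groups or draw calls when its pipeline isn't compiled yet.
```rust
// Simplified stand-ins for illustration; not Bevy's actual types.
enum RenderCommandResult {
    Success,
    Failure,
}

struct PipelineCache {
    pipeline_ready: bool,
}

trait RenderCommand {
    fn render(&self, cache: &PipelineCache) -> RenderCommandResult;
}

struct DrawMesh;

impl RenderCommand for DrawMesh {
    fn render(&self, cache: &PipelineCache) -> RenderCommandResult {
        if !cache.pipeline_ready {
            // Abort early: no bind groups are set and no draw call is issued.
            return RenderCommandResult::Failure;
        }
        // ... set the pipeline, set bind groups, and draw here ...
        RenderCommandResult::Success
    }
}

fn main() {
    let cache = PipelineCache { pipeline_ready: false };
    match DrawMesh.render(&cache) {
        RenderCommandResult::Failure => println!("pipeline not ready, skipped the draw"),
        RenderCommandResult::Success => println!("drawn"),
    }
}
```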
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2022-05-05 00:46:32 +00:00
|
|
|
if layout.contains(Mesh::ATTRIBUTE_COLOR) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("VERTEX_COLORS".into());
|
2022-05-05 00:46:32 +00:00
|
|
|
vertex_attributes.push(Mesh::ATTRIBUTE_COLOR.at_shader_location(4));
|
|
|
|
}
|
|
|
|
|
2023-01-19 22:11:13 +00:00
|
|
|
let mut bind_group_layout = match key.msaa_samples() {
|
|
|
|
1 => vec![self.view_layout.clone()],
|
|
|
|
_ => {
|
|
|
|
shader_defs.push("MULTISAMPLED".into());
|
|
|
|
vec![self.view_layout_multisampled.clone()]
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offsets, and providing a set of weights for each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allows composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in Bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute, and each layer is a different
target. Each target gets a full 2d texture layer (rather than a single
row) because the number of attributes × components × animated vertices
is expected to always exceed the maximum pixel row size limit of WebGL2.
The CPU side copies fairly closely the way skinning is implemented,
while on the GPU side the shader morph target implementation is a
relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The user-accessible `MorphWeights` component controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh
may have different weights). All the weights are uploaded to a uniform
buffer of 256 `f32`. We limit meshes to 16 poses each, and a total of
256 poses.
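For a back-of-the-envelope sense of why each target needs a full 2d layer (the width limit and the helper below are illustrative assumptions, not the actual implementation): each target stores vertex count × animated components texels, which has to wrap across rows because a single row would exceed the WebGL2 texture width limit.
```rust
// Back-of-the-envelope sketch: compute the 2d extent of one morph target layer.
// The width limit is an assumed, conservative WebGL2-era value.
const MAX_TEXTURE_WIDTH: u32 = 2048;

fn target_layer_extent(vertex_count: u32, components_per_vertex: u32) -> (u32, u32) {
    let texels = vertex_count * components_per_vertex;
    let width = texels.min(MAX_TEXTURE_WIDTH);
    // Ceiling division: number of rows needed once the data wraps.
    let height = (texels + MAX_TEXTURE_WIDTH - 1) / MAX_TEXTURE_WIDTH;
    (width, height)
}

fn main() {
    // Position + normal + tangent deltas per vertex -> 9 animated components.
    let (w, h) = target_layer_extent(50_000, 9);
    println!("one morph target layer: {w} x {h} texels"); // far taller than one row
}
```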
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`. This allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers); see the documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (a single
level of hierarchy) of an entity. This allows controlling several mesh
primitives through a single entity, _as per the glTF spec_.
- Add a `MorphTargetNames` component, naming each index of the loaded
morph targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously, they
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
bind_group_layout.push(setup_morph_and_skinning_defs(
|
|
|
|
&self.mesh_layouts,
|
|
|
|
layout,
|
|
|
|
5,
|
|
|
|
&key,
|
|
|
|
&mut shader_defs,
|
|
|
|
&mut vertex_attributes,
|
|
|
|
));
|
2022-03-29 18:31:13 +00:00
|
|
|
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)
# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.
## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (preprocess_depths pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct world position, raytrace in screen
space to estimate occlusion. Rather than doing completely random samples
on a hemisphere, you choose random _slices_ of the hemisphere, and then
can analytically compute the full occlusion of that slice. Also compute
edges based on depth differences here.
* Spatial denoise pass - bilateral blur, using edge detection to not
blur over edges. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to be the minimum of ssao/material occlusion. This then feeds into the
rest of the PBR shader as normal.
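A tiny numeric sketch of that last combination step (plain Rust rather than the actual WGSL): the screen-space term can only darken, so it's merged with the material's baked occlusion by taking the minimum.
```rust
// Combine screen-space ambient occlusion with the material's occlusion value.
fn combined_occlusion(ssao: f32, material_occlusion: f32) -> f32 {
    ssao.min(material_occlusion)
}

fn main() {
    // A crease found by the SSAO pass (0.3) wins over the material's 0.9.
    assert_eq!(combined_occlusion(0.3, 0.9), 0.3);
}
```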
---
## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)
## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)
## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)
## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
* https://github.com/gfx-rs/wgpu/pull/3798
* https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because r16float is not a valid
storage texture format
https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats. We can fix
this with a fallback to r32float.
---
## Changelog
- Added ScreenSpaceAmbientOcclusionSettings,
ScreenSpaceAmbientOcclusionQualityLevel, and
ScreenSpaceAmbientOcclusionBundle
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-18 21:05:55 +00:00
|
|
|
if key.contains(MeshPipelineKey::SCREEN_SPACE_AMBIENT_OCCLUSION) {
|
|
|
|
shader_defs.push("SCREEN_SPACE_AMBIENT_OCCLUSION".into());
|
|
|
|
}
|
|
|
|
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
let vertex_buffer_layout = layout.get_layout(&vertex_attributes)?;
|
|
|
|
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
let (label, blend, depth_write_enabled);
|
2023-01-21 21:46:53 +00:00
|
|
|
let pass = key.intersection(MeshPipelineKey::BLEND_RESERVED_BITS);
|
2023-06-21 17:25:20 +00:00
|
|
|
let mut is_opaque = false;
|
2023-03-05 00:17:44 +00:00
|
|
|
if pass == MeshPipelineKey::BLEND_ALPHA {
|
|
|
|
label = "alpha_blend_mesh_pipeline".into();
|
|
|
|
blend = Some(BlendState::ALPHA_BLENDING);
|
|
|
|
// For the transparent pass, fragments that are closer will be alpha blended
|
|
|
|
// but their depth is not written to the depth buffer
|
|
|
|
depth_write_enabled = false;
|
|
|
|
} else if pass == MeshPipelineKey::BLEND_PREMULTIPLIED_ALPHA {
|
2023-01-21 21:46:53 +00:00
|
|
|
label = "premultiplied_alpha_mesh_pipeline".into();
|
|
|
|
blend = Some(BlendState::PREMULTIPLIED_ALPHA_BLENDING);
|
|
|
|
shader_defs.push("PREMULTIPLY_ALPHA".into());
|
|
|
|
shader_defs.push("BLEND_PREMULTIPLIED_ALPHA".into());
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
// For the transparent pass, fragments that are closer will be alpha blended
|
|
|
|
// but their depth is not written to the depth buffer
|
|
|
|
depth_write_enabled = false;
|
2023-01-21 21:46:53 +00:00
|
|
|
} else if pass == MeshPipelineKey::BLEND_MULTIPLY {
|
|
|
|
label = "multiply_mesh_pipeline".into();
|
|
|
|
blend = Some(BlendState {
|
|
|
|
color: BlendComponent {
|
|
|
|
src_factor: BlendFactor::Dst,
|
|
|
|
dst_factor: BlendFactor::OneMinusSrcAlpha,
|
|
|
|
operation: BlendOperation::Add,
|
|
|
|
},
|
|
|
|
alpha: BlendComponent::OVER,
|
|
|
|
});
|
|
|
|
shader_defs.push("PREMULTIPLY_ALPHA".into());
|
|
|
|
shader_defs.push("BLEND_MULTIPLY".into());
|
|
|
|
// For the multiply pass, fragments that are closer will be alpha blended
|
|
|
|
// but their depth is not written to the depth buffer
|
|
|
|
depth_write_enabled = false;
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
} else {
|
|
|
|
label = "opaque_mesh_pipeline".into();
|
|
|
|
blend = Some(BlendState::REPLACE);
|
|
|
|
// For the opaque and alpha mask passes, fragments that are closer will replace
|
|
|
|
// the current fragment value in the output and the depth is written to the
|
|
|
|
// depth buffer
|
|
|
|
depth_write_enabled = true;
|
2023-06-21 17:25:20 +00:00
|
|
|
is_opaque = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
if key.contains(MeshPipelineKey::NORMAL_PREPASS) && key.msaa_samples() == 1 && is_opaque {
|
|
|
|
shader_defs.push("LOAD_PREPASS_NORMALS".into());
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2022-10-26 20:13:59 +00:00
|
|
|
if key.contains(MeshPipelineKey::TONEMAP_IN_SHADER) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("TONEMAP_IN_SHADER".into());
|
Fix color banding by dithering image before quantization (#5264)
# Objective
- Closes #5262
- Fix color banding caused by quantization.
## Solution
- Adds dithering to the tonemapping node from #3425.
- This is inspired by Godot's default "debanding" shader: https://gist.github.com/belzecue/
- Unlike Godot:
- debanding happens after tonemapping. My understanding is that this is preferred, because we are running the debanding at the last moment before quantization (`[f32, f32, f32, f32]` -> `f32`). This ensures we aren't biasing the dithering strength by applying it in a different (linear) color space.
- This code instead uses and references the original source, Valve at GDC 2015
![Screenshot from 2022-11-10 13-44-46](https://user-images.githubusercontent.com/2632925/201218880-70f4cdab-a1ed-44de-a88c-8759e77197f1.png)
![Screenshot from 2022-11-10 13-41-11](https://user-images.githubusercontent.com/2632925/201218883-72393352-b162-41da-88bb-6e54a1e26853.png)
## Additional Notes
Real time rendering to standard dynamic range outputs is limited to 8 bits of depth per color channel. Internally we keep everything in full 32-bit precision (`vec4<f32>`) inside passes and 16-bit between passes until the image is ready to be displayed, at which point the GPU implicitly converts our `vec4<f32>` into a single 32bit value per pixel, with each channel (rgba) getting 8 of those 32 bits.
### The Problem
8 bits of color depth is simply not enough precision to make each step invisible - we only have 256 values per channel! Human vision can perceive steps in luma to about 14 bits of precision. When drawing a very slight gradient, the transition between steps becomes visible because with a gradient, neighboring pixels will all jump to the next "step" of precision at the same time.
### The Solution
One solution is to simply output in HDR - more bits of color data means the transition between bands will become smaller. However, not everyone has hardware that supports 10+ bit color depth. Additionally, 10 bit color doesn't even fully solve the issue: banding will still result in coherent bands on shallow gradients, though the steps will be harder to perceive.
The solution in this PR adds noise to the signal before it is "quantized" or resampled from 32 to 8 bits. Done naively, it's easy to add unneeded noise to the image. To ensure dithering is correct and absolutely minimal, noise is added *within* one step of the output color depth. When converting from the 32bit to 8bit signal, the value is rounded to the nearest 8 bit value (0 - 255). Banding occurs around the transition from one value to the next, let's say from 50-51. Dithering will never add more than +/-0.5 bits of noise, so the pixels near this transition might round to 50 instead of 51 but will never round more than one step. This means that the output image won't have excess variance:
- in a gradient from 49 to 51, there will be a step between each band at 49, 50, and 51.
- Done correctly, the modified image of this gradient will never have adjacent pixels more than one step (0-255) from each other.
- I.e. when scanning across the gradient you should expect to see:
```
|-band-| |-band-| |-band-|
Baseline: 49 49 49 50 50 50 51 51 51
Dithered: 49 50 49 50 50 51 50 51 51
Dithered (wrong): 49 50 51 49 50 51 49 51 50
```
![Screenshot from 2022-11-10 14-12-36](https://user-images.githubusercontent.com/2632925/201219075-ab3f46be-d4e9-4869-b66b-a92e1706f49e.png)
![Screenshot from 2022-11-10 14-11-48](https://user-images.githubusercontent.com/2632925/201219079-ec5d2add-817d-487a-8fc1-84569c9cda73.png)
You can see from above how correct dithering "fuzzes" the transition between bands to reduce distinct steps in color, without adding excess noise.
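Here's a minimal numeric sketch of that rule (plain Rust, not the tonemapping shader; the noise source is left as a plain parameter): adding at most ±0.5 of a step before rounding means a value can only ever land on one of its two neighboring 8-bit steps.
```rust
// Quantize a 0.0..=1.0 channel to 8 bits with sub-step dithering.
// `noise` is assumed to be in [-0.5, 0.5); in the shader it comes from a
// screen-space dither pattern, here it is simply passed in.
fn quantize_dithered(value: f32, noise: f32) -> u8 {
    (value * 255.0 + noise).round().clamp(0.0, 255.0) as u8
}

fn main() {
    // A value sitting between steps 50 and 51 only ever resolves to 50 or 51.
    for noise in [-0.49, 0.0, 0.49] {
        println!("{}", quantize_dithered(50.4 / 255.0, noise));
    }
}
```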
### HDR
The previous section (and this PR) assumes the final output is to an 8-bit texture, however this is not always the case. When Bevy adds HDR support, the dithering code will need to take the per-channel depth into account instead of assuming it to be 0-255. Edit: I talked with Rob about this and it seems like the current solution is okay. We may need to revisit once we have actual HDR final image output.
---
## Changelog
### Added
- All pipelines now support deband dithering. This is enabled by default in 3D, and can be toggled in the `Tonemapping` component in camera bundles. Banding is a graphical artifact created when the rendered image is crunched from high precision (f32 per color channel) down to the final output (u8 per channel in SDR). This results in subtle gradients becoming blocky due to the reduced color precision. Deband dithering applies a small amount of noise to the signal before it is "crunched", which breaks up the hard edges of blocks (bands) of color. Note that this does not add excess noise to the image, as the amount of noise is less than a single step of a color channel - just enough to break up the transition between color blocks in a gradient.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-11-11 19:43:45 +00:00
|
|
|
|
2023-02-19 20:38:13 +00:00
|
|
|
let method = key.intersection(MeshPipelineKey::TONEMAP_METHOD_RESERVED_BITS);
|
|
|
|
|
|
|
|
if method == MeshPipelineKey::TONEMAP_METHOD_NONE {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_NONE".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_REINHARD {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_REINHARD".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_REINHARD_LUMINANCE {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_REINHARD_LUMINANCE".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_ACES_FITTED {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_ACES_FITTED ".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_AGX {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_AGX".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_SOMEWHAT_BORING_DISPLAY_TRANSFORM {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_SOMEWHAT_BORING_DISPLAY_TRANSFORM".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_BLENDER_FILMIC {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_BLENDER_FILMIC".into());
|
|
|
|
} else if method == MeshPipelineKey::TONEMAP_METHOD_TONY_MC_MAPFACE {
|
|
|
|
shader_defs.push("TONEMAP_METHOD_TONY_MC_MAPFACE".into());
|
|
|
|
}
|
|
|
|
|
Fix color banding by dithering image before quantization (#5264)
2022-11-11 19:43:45 +00:00
|
|
|
// Debanding is tied to tonemapping in the shader, cannot run without it.
|
|
|
|
if key.contains(MeshPipelineKey::DEBAND_DITHER) {
|
2022-11-21 22:38:29 +00:00
|
|
|
shader_defs.push("DEBAND_DITHER".into());
|
Fix color banding by dithering image before quantization (#5264)
2022-11-11 19:43:45 +00:00
|
|
|
}
|
2022-10-26 20:13:59 +00:00
|
|
|
}
|
|
|
|
|
Add `MAY_DISCARD` shader def, enabling early depth tests for most cases (#6697)
# Objective
- Right now we can't really benefit from [early depth
testing](https://www.khronos.org/opengl/wiki/Early_Fragment_Test) in our
PBR shader because it includes codepaths with `discard`, even for
situations where they are not necessary.
## Solution
- This PR introduces a new `MeshPipelineKey` and shader def,
`MAY_DISCARD`;
- All possible material/mesh options that may result in `discard`s
being needed must set `MAY_DISCARD` ahead of time (see the sketch after
this list):
- Right now, this is only `AlphaMode::Mask(f32)`, but in the future
might include other options/effects; (e.g. one effect I'm personally
interested in is bayer dither pseudo-transparency for LOD transitions of
opaque meshes)
- Shader codepaths that can `discard` are guarded by an `#ifdef
MAY_DISCARD` preprocessor directive:
- Right now, this is just one branch in `alpha_discard()`;
- If `MAY_DISCARD` is _not_ set, the `@early_depth_test` attribute is
added to the PBR fragment shader. This is a not yet documented, possibly
non-standard WGSL extension I found browsing Naga's source code. [I
opened a PR to document it
there](https://github.com/gfx-rs/naga/pull/2132). My understanding is
that for backends where this attribute is supported, it will force an
explicit opt-in to early depth test. (e.g. via
`layout(early_fragment_tests) in;` in GLSL)
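Here's a hedged sketch of that opt-in (the alpha-mode enum and shader-def list below are simplified stand-ins, not the actual bevy_pbr code): only materials that can actually `discard`, i.e. alpha-masked ones, set `MAY_DISCARD` and give up the early depth test fast path.
```rust
// Simplified stand-ins for the material's alpha mode and the shader defs;
// the real pipeline key is a bitflags type inside bevy_pbr.
enum AlphaMode {
    Opaque,
    Mask(f32),
    Blend,
}

fn shader_defs_for(alpha_mode: &AlphaMode) -> Vec<&'static str> {
    let mut defs = Vec::new();
    // Only alpha-masked materials can hit `discard`, so only they
    // opt into MAY_DISCARD (and thereby skip forcing early depth tests).
    if let AlphaMode::Mask(_) = alpha_mode {
        defs.push("MAY_DISCARD");
    }
    defs
}

fn main() {
    assert!(shader_defs_for(&AlphaMode::Mask(0.5)).contains(&"MAY_DISCARD"));
    assert!(shader_defs_for(&AlphaMode::Opaque).is_empty());
    assert!(shader_defs_for(&AlphaMode::Blend).is_empty());
}
```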
## Caveats
- I included `@early_depth_test` for the sake of us being explicit, and
avoiding the need for the driver to be “smart” about enabling this
feature. That way, if we make a mistake and include a `discard`
unguarded by `MAY_DISCARD`, it will either produce errors or noticeable
visual artifacts so that we'll catch it early, instead of causing a
performance regression.
- I'm not sure explicit early depth test is supported on the naga Metal
backend, which is what I'm currently using, so I can't really test the
explicit early depth test enable; I would like others with Vulkan/GL
hardware to test it if possible;
- I would like some guidance on how to measure/verify the performance
benefits of this;
- If I understand it correctly, this, or _something like this_ is needed
to fully reap the performance gains enabled by #6284;
- This will _most definitely_ conflict with #6284 and #6644. I can fix
the conflicts as needed, depending on whether/the order they end up
being merging in.
---
## Changelog
### Changed
- Early depth tests are now enabled whenever possible for meshes using
`StandardMaterial`, reducing the number of fragments evaluated for
scenes with lots of occlusions.
2023-05-29 15:15:01 +00:00
|
|
|
if key.contains(MeshPipelineKey::MAY_DISCARD) {
|
|
|
|
shader_defs.push("MAY_DISCARD".into());
|
|
|
|
}
|
|
|
|
|
EnvironmentMapLight, BRDF Improvements (#7051)
(Before)
![image](https://user-images.githubusercontent.com/47158642/213946111-15ec758f-1f1d-443c-b196-1fdcd4ae49da.png)
(After)
![image](https://user-images.githubusercontent.com/47158642/217051179-67381e73-dd44-461b-a2c7-87b0440ef8de.png)
![image](https://user-images.githubusercontent.com/47158642/212492404-524e4ad3-7837-4ed4-8b20-2abc276aa8e8.png)
# Objective
- Improve lighting; especially reflections.
- Closes https://github.com/bevyengine/bevy/issues/4581.
## Solution
- Implement environment maps, providing better ambient light.
- Add microfacet multibounce approximation for specular highlights from Filament.
- Occlusion is no longer incorrectly applied to direct lighting. It now only applies to diffuse indirect light. Unsure if it's also supposed to apply to specular indirect light - the glTF specification just says "indirect light". In the case of ambient occlusion, for instance, that's usually only calculated as diffuse though. For now, I'm choosing to apply this just to indirect diffuse light, and not specular.
- Modified the PBR example to use an environment map, and have labels.
- Added `FallbackImageCubemap`.
## Implementation
- IBL technique references can be found in environment_map.wgsl.
- It's more accurate to use a LUT for the scale/bias. Filament has a good reference on generating this LUT. For now, I just used an analytic approximation.
- For now, environment maps must first be prefiltered outside of bevy using a 3rd party tool. See the `EnvironmentMap` documentation.
- Eventually, we should have our own prefiltering code, so that we can have dynamically changing environment maps, as well as let users drop in an HDR image and use asset preprocessing to create the needed textures using only bevy.
---
## Changelog
- Added an `EnvironmentMapLight` camera component that adds additional
ambient light to a scene (see the usage sketch after this list).
- Fixed StandardMaterial occlusion being incorrectly applied to direct lighting.
- Added `FallbackImageCubemap`.
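A usage sketch for the new camera component (the asset paths and the exact field names are assumptions on my part, not verbatim from this PR): it is inserted alongside the 3d camera so the prefiltered cubemaps light everything that camera renders.
```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

// Hypothetical startup system; the .ktx2 paths are placeholders for cubemaps
// prefiltered offline as described above.
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
        },
    ));
}
```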
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2023-02-09 16:46:32 +00:00
|
|
|
if key.contains(MeshPipelineKey::ENVIRONMENT_MAP) {
|
|
|
|
shader_defs.push("ENVIRONMENT_MAP".into());
|
|
|
|
}
|
|
|
|
|
Apply codebase changes in preparation for `StandardMaterial` transmission (#8704)
# Objective
- Make #8015 easier to review;
## Solution
- This commit contains changes not directly related to transmission
required by #8015, in easier-to-review, one-change-per-commit form.
---
## Changelog
### Fixed
- Clear motion vector prepass using `0.0` instead of `1.0`, to avoid TAA
artifacts on transparent objects against the background;
### Added
- The `E` mathematical constant is now available for use in shaders,
exposed under `bevy_pbr::utils`;
- A new `TAA` shader def is now available, for conditionally enabling
shader logic via `#ifdef` when TAA is enabled; (e.g. for jittering
texture samples)
- A new `FallbackImageZero` resource is introduced, for when a fallback
image filled with zeroes is required;
- A new `RenderPhase<I>::render_range()` method is introduced, for
render phases that need to render their items in multiple parceled out
“steps”;
### Changed
- The `MainTargetTextures` struct now holds both `Texture` and
`TextureViews` for the main textures;
- The fog shader functions under `bevy_pbr::fog` now take a `Fog`
structure as their first argument, instead of relying on the global
`fog` uniform;
- The main textures can now be used as copy sources;
## Migration Guide
- `ViewTarget::main_texture()` and `ViewTarget::main_texture_other()`
now return `&Texture` instead of `&TextureView`. If you were relying on
these methods, replace your usage with
`ViewTarget::main_texture_view()`and
`ViewTarget::main_texture_other_view()`, respectively;
- `ViewTarget::sampled_main_texture()` now returns `Option<&Texture>`
instead of a `Option<&TextureView>`. If you were relying on this method,
replace your usage with `ViewTarget::sampled_main_texture_view()`;
- The `apply_fog()`, `linear_fog()`, `exponential_fog()`,
`exponential_squared_fog()` and `atmospheric_fog()` functions now take a
configurable `Fog` struct. If you were relying on them, update your
usage by adding the global `fog` uniform as their first argument;
2023-05-30 14:21:53 +00:00
|
|
|
if key.contains(MeshPipelineKey::TAA) {
|
|
|
|
shader_defs.push("TAA".into());
|
|
|
|
}
|
|
|
|
|
2023-01-19 22:11:13 +00:00
|
|
|
let format = if key.contains(MeshPipelineKey::HDR) {
|
|
|
|
ViewTarget::TEXTURE_FORMAT_HDR
|
|
|
|
} else {
|
|
|
|
TextureFormat::bevy_default()
|
2022-10-26 20:13:59 +00:00
|
|
|
};
|
|
|
|
|
2023-07-31 13:14:40 +00:00
|
|
|
// This is defined here so that custom shaders that use something other than
|
|
|
|
// the mesh binding from bevy_pbr::mesh_bindings can easily make use of this
|
|
|
|
// in their own shaders.
|
|
|
|
if let Some(per_object_buffer_batch_size) = self.per_object_buffer_batch_size {
|
|
|
|
shader_defs.push(ShaderDefVal::UInt(
|
|
|
|
"PER_OBJECT_BUFFER_BATCH_SIZE".into(),
|
|
|
|
per_object_buffer_batch_size,
|
|
|
|
));
|
|
|
|
}
|
|
|
|
|
2023-08-09 18:38:45 +00:00
|
|
|
let mut push_constant_ranges = Vec::with_capacity(1);
|
|
|
|
if cfg!(all(feature = "webgl", target_arch = "wasm32")) {
|
|
|
|
push_constant_ranges.push(PushConstantRange {
|
|
|
|
stages: ShaderStages::VERTEX,
|
|
|
|
range: 0..4,
|
|
|
|
});
|
|
|
|
}
|
|
|
|
|
Mesh vertex buffer layouts (#3959)
This PR makes a number of changes to how meshes and vertex attributes are handled, which the goal of enabling easy and flexible custom vertex attributes:
* Reworks the `Mesh` type to use the newly added `VertexAttribute` internally
* `VertexAttribute` defines the name, a unique `VertexAttributeId`, and a `VertexFormat`
* `VertexAttributeId` is used to produce consistent sort orders for vertex buffer generation, replacing the more expensive and often surprising "name based sorting"
* Meshes can be used to generate a `MeshVertexBufferLayout`, which defines the layout of the gpu buffer produced by the mesh. `MeshVertexBufferLayouts` can then be used to generate actual `VertexBufferLayouts` according to the requirements of a specific pipeline. This decoupling of "mesh layout" vs "pipeline vertex buffer layout" is what enables custom attributes. We don't need to standardize _mesh layouts_ or contort meshes to meet the needs of a specific pipeline. As long as the mesh has what the pipeline needs, it will work transparently.
* Mesh-based pipelines now specialize on `&MeshVertexBufferLayout` via the new `SpecializedMeshPipeline` trait (which behaves like `SpecializedPipeline`, but adds `&MeshVertexBufferLayout`). The integrity of the pipeline cache is maintained because the `MeshVertexBufferLayout` is treated as part of the key (which is fully abstracted from implementers of the trait ... no need to add any additional info to the specialization key).
* Hashing `MeshVertexBufferLayout` is too expensive to do for every entity, every frame. To make this scalable, I added a generalized "pre-hashing" solution to `bevy_utils`: `Hashed<T>` keys and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). Why didn't I just do the quick and dirty in-place "pre-compute hash and use that u64 as a key in a hashmap" that we've done in the past? Because it's wrong! Hashes by themselves aren't enough because two different values can produce the same hash. Re-hashing a hash is even worse! I decided to build a generalized solution because this pattern has come up in the past and we've chosen to do the wrong thing. Now we can do the right thing! This did unfortunately require pulling in `hashbrown` and using that in `bevy_utils`, because avoiding re-hashes requires the `raw_entry_mut` api, which isn't stabilized yet (and may never be ... `entry_ref` is favored now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. Note that adding `hashbrown` doesn't increase our dependency count because it was already in our tree. I will probably break these changes out into their own PR.
* Specializing on `MeshVertexBufferLayout` has one non-obvious behavior: it can produce identical pipelines for two different MeshVertexBufferLayouts. To optimize the number of active pipelines / reduce re-binds while drawing, I de-duplicate pipelines post-specialization using the final `VertexBufferLayout` as the key. For example, consider a pipeline that needs the layout `(position, normal)` and is specialized using two meshes: `(position, normal, uv)` and `(position, normal, other_vec2)`. If both of these meshes result in `(position, normal)` specializations, we can use the same pipeline! Now we do. Cool!
To briefly illustrate, this is what the relevant section of `MeshPipeline`'s specialization code looks like now:
```rust
impl SpecializedMeshPipeline for MeshPipeline {
type Key = MeshPipelineKey;
fn specialize(
&self,
key: Self::Key,
layout: &MeshVertexBufferLayout,
) -> RenderPipelineDescriptor {
let mut vertex_attributes = vec![
Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
];
let mut shader_defs = Vec::new();
if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
shader_defs.push(String::from("VERTEX_TANGENTS"));
vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
}
let vertex_buffer_layout = layout
.get_layout(&vertex_attributes)
.expect("Mesh is missing a vertex attribute");
```
Notice that this is _much_ simpler than it was before. And now any mesh with any layout can be used with this pipeline, provided it has vertex positions, normals, and uvs. We even got to remove `HAS_TANGENTS` from MeshPipelineKey and `has_tangents` from `GpuMesh`, because that information is redundant with `MeshVertexBufferLayout`.
This is still a draft because I still need to:
* Add more docs
* Experiment with adding error handling to mesh pipeline specialization (which would print errors at runtime when a mesh is missing a vertex attribute required by a pipeline). If it doesn't tank perf, we'll keep it.
* Consider breaking out the PreHash / hashbrown changes into a separate PR.
* Add an example illustrating this change
* Verify that the "mesh-specialized pipeline de-duplication code" works properly
Please don't yell at me for not doing these things yet :) Just trying to get this in people's hands asap.
Alternative to #3120
Fixes #3030
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-02-23 23:21:13 +00:00
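As a quick illustration of the custom-attribute workflow this enables, here is a minimal sketch using the current `Mesh` API; the attribute name, id, and helper function are assumptions for the example, not code from the PR:
```rust
use bevy::prelude::Mesh;
use bevy::render::mesh::MeshVertexAttribute;
use bevy::render::render_resource::VertexFormat;

// A custom per-vertex attribute; the unique id (not the name) drives the
// consistent sort order used when building vertex buffers.
const ATTRIBUTE_BLEND_COLOR: MeshVertexAttribute =
    MeshVertexAttribute::new("Vertex_BlendColor", 988_540_917, VertexFormat::Float32x4);

fn add_blend_colors(mesh: &mut Mesh, colors: Vec<[f32; 4]>) {
    // Any pipeline whose requested `MeshVertexBufferLayout` includes this
    // attribute can now specialize against meshes that carry it.
    mesh.insert_attribute(ATTRIBUTE_BLEND_COLOR, colors);
}
```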
|
|
|
Ok(RenderPipelineDescriptor {
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to set bind groups / issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
vertex: VertexState {
|
|
|
|
shader: MESH_SHADER_HANDLE.typed::<Shader>(),
|
|
|
|
entry_point: "vertex".into(),
|
|
|
|
shader_defs: shader_defs.clone(),
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
buffers: vec![vertex_buffer_layout],
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
},
|
|
|
|
fragment: Some(FragmentState {
|
|
|
|
shader: MESH_SHADER_HANDLE.typed::<Shader>(),
|
|
|
|
shader_defs,
|
|
|
|
entry_point: "fragment".into(),
|
2022-07-14 21:17:16 +00:00
|
|
|
targets: vec![Some(ColorTargetState {
|
2022-10-26 20:13:59 +00:00
|
|
|
format,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
blend,
|
|
|
|
write_mask: ColorWrites::ALL,
|
2022-07-14 21:17:16 +00:00
|
|
|
})],
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}),
|
2023-02-17 06:20:16 +00:00
|
|
|
layout: bind_group_layout,
|
2023-08-09 18:38:45 +00:00
|
|
|
push_constant_ranges,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
primitive: PrimitiveState {
|
|
|
|
front_face: FrontFace::Ccw,
|
|
|
|
cull_mode: Some(Face::Back),
|
2021-12-19 03:03:06 +00:00
|
|
|
unclipped_depth: false,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
polygon_mode: PolygonMode::Fill,
|
|
|
|
conservative: false,
|
2021-12-18 20:55:40 +00:00
|
|
|
topology: key.primitive_topology(),
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
strip_index_format: None,
|
|
|
|
},
|
|
|
|
depth_stencil: Some(DepthStencilState {
|
|
|
|
format: TextureFormat::Depth32Float,
|
|
|
|
depth_write_enabled,
|
2023-01-19 22:11:13 +00:00
|
|
|
depth_compare: CompareFunction::GreaterEqual,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
stencil: StencilState {
|
|
|
|
front: StencilFaceState::IGNORE,
|
|
|
|
back: StencilFaceState::IGNORE,
|
|
|
|
read_mask: 0,
|
|
|
|
write_mask: 0,
|
|
|
|
},
|
|
|
|
bias: DepthBiasState {
|
|
|
|
constant: 0,
|
|
|
|
slope_scale: 0.0,
|
|
|
|
clamp: 0.0,
|
|
|
|
},
|
|
|
|
}),
|
|
|
|
multisample: MultisampleState {
|
|
|
|
count: key.msaa_samples(),
|
|
|
|
mask: !0,
|
|
|
|
alpha_to_coverage_enabled: false,
|
|
|
|
},
|
|
|
|
label: Some(label),
|
Mesh vertex buffer layouts (#3959)
2022-02-23 23:21:13 +00:00
|
|
|
})
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [glTF standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copy of the same mesh may
have different weights), all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each of up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`, this allows defining morph targets (a fairly complex and
nested data structure) through iterators (ie: single copy instead of
passing around buffers), see documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously, this
was a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
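As a small usage sketch for the `MorphWeights` component described above (the system, query, and animation curve are illustrative assumptions, and it presumes the `weights_mut()` accessor exposed by `bevy_render`):
```rust
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

// Oscillates the weight of the first morph target on every entity that has weights.
fn animate_first_pose(time: Res<Time>, mut query: Query<&mut MorphWeights>) {
    for mut weights in &mut query {
        if let Some(first) = weights.weights_mut().first_mut() {
            *first = 0.5 + 0.5 * time.elapsed_seconds().sin();
        }
    }
}
```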
|
|
|
/// Bind groups for meshes currently loaded.
|
|
|
|
#[derive(Resource, Default)]
|
|
|
|
pub struct MeshBindGroups {
|
|
|
|
model_only: Option<BindGroup>,
|
|
|
|
skinned: Option<BindGroup>,
|
|
|
|
morph_targets: HashMap<HandleId, BindGroup>,
|
|
|
|
}
|
|
|
|
impl MeshBindGroups {
|
|
|
|
pub fn reset(&mut self) {
|
|
|
|
self.model_only = None;
|
|
|
|
self.skinned = None;
|
|
|
|
self.morph_targets.clear();
|
|
|
|
}
|
|
|
|
/// Get the `BindGroup` for `GpuMesh` with given `handle_id`.
|
|
|
|
pub fn get(&self, handle_id: HandleId, is_skinned: bool, morph: bool) -> Option<&BindGroup> {
|
|
|
|
match (is_skinned, morph) {
|
|
|
|
(_, true) => self.morph_targets.get(&handle_id),
|
|
|
|
(true, false) => self.skinned.as_ref(),
|
|
|
|
(false, false) => self.model_only.as_ref(),
|
|
|
|
}
|
|
|
|
}
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062
# Objective
- Reorder render schedule sets to allow data preparation when phase item
order is known to support improved batching
- Part of the batching/instancing etc plan from here:
https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one.
Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new
ordering
## Solution
- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and
sets in the render schedule.
- Put prepare_assets systems in the `PrepareAssets` set
- If explicit dependencies are needed on Mesh or Material RenderAssets
then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between
`ExtractCommands` and Queue
- Move `queue_mesh*_bind_group` to the Prepare stage
- Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a
`PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroup` set
after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
- `prepare_lights` creates views and this must happen before `Queue`
- This system needs refactoring to stop handling all responsibilities
- Gather lights, sort, and create shadow map views. Store sorted light
entities in a resource
- Remove `BatchedPhaseItem`
- Replace `batch_range` with `batch_size` representing how many items to
skip after rendering the item or to skip the item entirely if
`batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase
items and `prepare_sprites` for batching after the `PhaseSort`
- `PhaseItem`s are still inserted in `queue_sprites`
- After sorting, adjacent compatible sprite phase items are accumulated
into `SpriteBatch` components on the first entity of each batch,
containing a range of vertex indices. The associated `PhaseItem`'s
`batch_size` is updated appropriately.
- `SpriteBatch` items are then drawn skipping over the other items in
the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`
---
## Changelog
Changed:
- Reordered and reworked render app schedule sets. The main change is
that data is extracted, queued, sorted, and then prepared when the order
of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the
reordering.
## Migration Guide
- Assets such as materials and meshes should now be created in
`PrepareAssets` e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`
e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in
`PrepareResources`, e.g. `prepare_prepass_textures`,
`prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups` e.g.
`prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare` where the
order of the phase items is known e.g. `prepare_sprites`
## Next Steps
- Introduce some generic mechanism to ensure items that can be batched
are grouped in the phase item order, currently you could easily have
`[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implementing batching across the rest of bevy
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2023-08-27 14:33:49 +00:00
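As a minimal sketch of wiring render-world systems into the reordered sets from the migration guide above (the plugin and system names are illustrative assumptions; the set names are the ones listed in the guide):
```rust
use bevy::prelude::*;
use bevy::render::{Render, RenderApp, RenderSet};

struct MyRenderPlugin;

impl Plugin for MyRenderPlugin {
    fn build(&self, app: &mut App) {
        // Render-world systems are registered on the render sub-app.
        let Ok(render_app) = app.get_sub_app_mut(RenderApp) else {
            return;
        };
        render_app.add_systems(
            Render,
            (
                queue_my_phase_items.in_set(RenderSet::Queue),
                prepare_my_buffers.in_set(RenderSet::PrepareResources),
                prepare_my_bind_group.in_set(RenderSet::PrepareBindGroups),
            ),
        );
    }
}

fn queue_my_phase_items() { /* push phase items here */ }
fn prepare_my_buffers() { /* write buffers and textures here */ }
fn prepare_my_bind_group() { /* build bind groups here */ }
```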
|
|
|
pub fn prepare_mesh_bind_group(
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine-grained
control. This is typically used for facial expressions. By specifying
multiple poses as vertex offsets and providing a weight for each pose,
it is possible to define surprisingly realistic transitions between
poses. Blending between multiple poses also allows composition. Morph
targets are part of the [glTF standard][2] and are a feature of Unity,
Unreal, and Babylon.js, so it is only natural to implement them in Bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh may
have different weights), all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
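To make this layout concrete, here is a small illustrative sketch (not Bevy's
actual code; the row width and per-vertex component count are assumptions) of
how a (target, vertex, component) triple could map to a texel in such a
layered morph texture:
```rust
// Illustrative only: each morph target is one texture layer, and every texel
// holds one component of an animated attribute (position, normal, tangent).
const COMPONENTS_PER_VERTEX: u32 = 3 + 3 + 3; // assumed: position + normal + tangent deltas

/// Map (target, vertex, component) to (x, y, layer) in the morph texture,
/// given a row width chosen to stay under the GPU's maximum texture dimension.
fn texel_coords(target: u32, vertex: u32, component: u32, row_width: u32) -> (u32, u32, u32) {
    let linear = vertex * COMPONENTS_PER_VERTEX + component;
    (linear % row_width, linear / row_width, target)
}

fn main() {
    // e.g. the normal's y component (component 4) of vertex 10 in target 2:
    let (x, y, layer) = texel_coords(2, 10, 4, 256);
    println!("texel at ({x}, {y}) in layer {layer}");
}
```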
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of fewer or more attributes (e.g. animated UVs, arbitrary
animated attributes)
- Dynamic pose allocation (so that, for example, zero-weighted poses
aren't uploaded to the GPU, enabling many more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`. These allow defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing buffers around); see the documentation of those traits for
details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
the various poses defined as morph targets (see the sketch after this
changelog).
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded
morph targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously, this
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
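For illustration, a minimal sketch of driving these weights from a system; the
module path and the `weights_mut` accessor used here are assumptions about the
exposed API, not a verified signature:
```rust
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

// Illustrative only (accessor name assumed): oscillate the first morph
// target's weight on every entity that has a `MorphWeights` component.
fn pulse_first_morph_target(time: Res<Time>, mut query: Query<&mut MorphWeights>) {
    let w = time.elapsed_seconds().sin() * 0.5 + 0.5;
    for mut weights in &mut query {
        if let Some(first) = weights.weights_mut().first_mut() {
            *first = w;
        }
    }
}
```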
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
meshes: Res<RenderAssets<Mesh>>,
|
|
|
|
mut groups: ResMut<MeshBindGroups>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to bind group / do draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
mesh_pipeline: Res<MeshPipeline>,
|
|
|
|
render_device: Res<RenderDevice>,
|
Use GpuArrayBuffer for MeshUniform (#9254)
# Objective
- Reduce the number of rebindings to enable batching of draw commands
## Solution
- Use the new `GpuArrayBuffer` for `MeshUniform` data to store all
`MeshUniform` data in arrays within fewer bindings
- Sort opaque/alpha mask prepass, opaque/alpha mask main, and shadow
phases also by the batch per-object data binding dynamic offset to
improve performance on WebGL2.
---
## Changelog
- Changed: Per-object `MeshUniform` data is now managed by
`GpuArrayBuffer` as arrays in buffers that need to be indexed into.
## Migration Guide
The old way of accessing the `model` member of an individual mesh
object's shader `Mesh` struct, where each `MeshUniform` was stored at
its own dynamic offset:
```wgsl
struct Vertex {
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh.model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
The new way, where one needs to index into the array of `Mesh`es for
the batch:
```wgsl
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh[vertex.instance_index].model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
Note that using the `instance_index` is the default way to pass the
per-object index into the shader, but if you wish to use custom
rendering approaches you can pass it in however you like.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
2023-07-30 13:17:08 +00:00
|
|
|
mesh_uniforms: Res<GpuArrayBuffer<MeshUniform>>,
|
2022-03-29 18:31:13 +00:00
|
|
|
skinned_mesh_uniform: Res<SkinnedMeshUniform>,
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
weights_uniform: Res<MorphUniform>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
) {
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
groups.reset();
|
|
|
|
let layouts = &mesh_pipeline.mesh_layouts;
|
Use GpuArrayBuffer for MeshUniform (#9254)
2023-07-30 13:17:08 +00:00
|
|
|
let Some(model) = mesh_uniforms.binding() else {
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
return;
|
|
|
|
};
|
Use GpuArrayBuffer for MeshUniform (#9254)
2023-07-30 13:17:08 +00:00
|
|
|
groups.model_only = Some(layouts.model_only(&render_device, &model));
|
2022-03-29 18:31:13 +00:00
|
|
|
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
let skin = skinned_mesh_uniform.buffer.buffer();
|
|
|
|
if let Some(skin) = skin {
|
Use GpuArrayBuffer for MeshUniform (#9254)
2023-07-30 13:17:08 +00:00
|
|
|
groups.skinned = Some(layouts.skinned(&render_device, &model, skin));
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if let Some(weights) = weights_uniform.buffer.buffer() {
|
|
|
|
for (id, gpu_mesh) in meshes.iter() {
|
|
|
|
if let Some(targets) = gpu_mesh.morph_targets.as_ref() {
|
|
|
|
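// Use the skinned + morphed layout only when the mesh's vertex layout
// actually contains skinning attributes; otherwise fall back to the
// morphed-only layout.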
let group = if let Some(skin) = skin.filter(|_| is_skinned(&gpu_mesh.layout)) {
|
Use GpuArrayBuffer for MeshUniform (#9254)
2023-07-30 13:17:08 +00:00
|
|
|
layouts.morphed_skinned(&render_device, &model, skin, weights, targets)
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
} else {
|
Use GpuArrayBuffer for MeshUniform (#9254)
2023-07-30 13:17:08 +00:00
|
|
|
layouts.morphed(&render_device, &model, weights, targets)
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [gltf standard][2] and are a feature of
Unity and Unreal, and babylone.js, it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of webGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copy of the same mesh may
have different weights), all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old babylone.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylone.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh of individually up to 116508 vertices,
animation currently strictly limited to the position, normal and tangent
attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`, this allows defining morph targets (a fairly complex and
nested data structure) through iterators (ie: single copy instead of
passing around buffers), see documentation of those traits for details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` control mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each indices of loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- handle morph targets animations in `bevy_animation` (previously, it
was a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorpStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are
now accessed through its `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
|
|
|
};
|
|
|
|
groups.morph_targets.insert(id.id(), group);
|
|
|
|
}
|
2022-03-29 18:31:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Migrate to encase from crevice (#4339)
# Objective
- Unify buffer APIs
- Also see #4272
## Solution
- Replace vendored `crevice` with `encase`
---
## Changelog
Changed `StorageBuffer`
Added `DynamicStorageBuffer`
Replaced `UniformVec` with `UniformBuffer`
Replaced `DynamicUniformVec` with `DynamicUniformBuffer`
## Migration Guide
### `StorageBuffer`
removed `set_body()`, `values()`, `values_mut()`, `clear()`, `push()`, `append()`
added `set()`, `get()`, `get_mut()`
### `UniformVec` -> `UniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `len()`, `is_empty()`, `capacity()`, `push()`, `reserve()`, `clear()`, `values()`
added `set()`, `get()`
### `DynamicUniformVec` -> `DynamicUniformBuffer`
renamed `uniform_buffer()` to `buffer()`
removed `capacity()`, `reserve()`
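As a rough illustration of the `UniformVec` -> `UniformBuffer` side of this guide (a sketch only, using the methods listed above plus `write_buffer`, which is assumed unchanged; `MyUniform` is a made-up example type):
```rust
use bevy::render::render_resource::{ShaderType, UniformBuffer};
use bevy::render::renderer::{RenderDevice, RenderQueue};

#[derive(ShaderType, Default)]
struct MyUniform {
    strength: f32,
}

fn prepare(device: &RenderDevice, queue: &RenderQueue) {
    // Old `UniformVec<MyUniform>` style:
    //     uniforms.clear();
    //     uniforms.push(MyUniform { strength: 1.0 });
    //     uniforms.write_buffer(device, queue);
    // With `UniformBuffer<MyUniform>` the single value is set instead of pushed:
    let mut uniforms = UniformBuffer::<MyUniform>::default();
    uniforms.set(MyUniform { strength: 1.0 });
    uniforms.write_buffer(device, queue);
}
```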
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-05-18 21:09:21 +00:00
|
|
|
// NOTE: This is using BufferVec because it is using a trick to allow a fixed-size array
|
|
|
|
// in a uniform buffer to be used like a variable-sized array by only writing the valid data
|
|
|
|
// into the buffer, knowing the number of valid items starting from the dynamic offset, and
|
|
|
|
// ignoring the rest, whether they're valid for other dynamic offsets or not. This trick may
|
|
|
|
// be supported later in encase, and then we should make use of it.
|
|
|
|
|
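To make the note above concrete, a hypothetical sketch (not the actual bevy code) of writing only the valid joints per mesh into one shared buffer and remembering each mesh's dynamic offset. It reuses the `BufferVec` calls visible in this file (`push`, `len`, `reserve`, `write_buffer`) and glosses over the uniform-buffer offset alignment that real code must also respect:
```rust
use bevy::math::Mat4;
use bevy::render::render_resource::BufferVec;
use bevy::render::renderer::{RenderDevice, RenderQueue};

const MAX_JOINTS: usize = 256;

// Append each mesh's joint matrices to one shared buffer and record the byte
// offset where they start; the shader binds a fixed-size array at that dynamic
// offset and only reads the valid entries, ignoring whatever follows.
fn write_joints(
    joints_per_mesh: &[Vec<Mat4>],
    buffer: &mut BufferVec<Mat4>,
    render_device: &RenderDevice,
    render_queue: &RenderQueue,
) -> Vec<u32> {
    let mut dynamic_offsets = Vec::new();
    for joints in joints_per_mesh {
        dynamic_offsets.push((buffer.len() * std::mem::size_of::<Mat4>()) as u32);
        for joint in joints.iter().take(MAX_JOINTS) {
            buffer.push(*joint);
        }
    }
    let len = buffer.len();
    buffer.reserve(len, render_device);
    buffer.write_buffer(render_device, render_queue);
    dynamic_offsets
}
```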
Make `Resource` trait opt-in, requiring `#[derive(Resource)]` V2 (#5577)
*This PR description is an edited copy of #5007, written by @alice-i-cecile.*
# Objective
Follow-up to https://github.com/bevyengine/bevy/pull/2254. The `Resource` trait currently has a blanket implementation for all types that meet its bounds.
While ergonomic, this results in several drawbacks:
* it is possible to make confusing, silent mistakes such as inserting a function pointer (Foo) rather than a value (Foo::Bar) as a resource
* it is challenging to discover if a type is intended to be used as a resource
* we cannot later add customization options (see the [RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/27-derive-component.md) for the equivalent choice for Component).
* dependencies can use the same Rust type as a resource in invisibly conflicting ways
* raw Rust types used as resources cannot preserve privacy appropriately, as anyone able to access that type can read and write to internal values
* we cannot capture a definitive list of possible resources to display to users in an editor
## Notes to reviewers
* Review this commit-by-commit; there's effectively no back-tracking and there's a lot of churn in some of these commits.
*ira: My commits are not as well organized :')*
* I've relaxed the bound on Local to Send + Sync + 'static: I don't think these concerns apply there, so this can keep things simple. Storing e.g. a u32 in a Local is fine, because there's a variable name attached explaining what it does.
* I think this is a bad place for the Resource trait to live, but I've left it in place to make reviewing easier. IMO that's best tackled with https://github.com/bevyengine/bevy/issues/4981.
## Changelog
`Resource` is no longer automatically implemented for all matching types. Instead, use the new `#[derive(Resource)]` macro.
## Migration Guide
Add `#[derive(Resource)]` to all types you are using as a resource.
If you are using a third party type as a resource, wrap it in a tuple struct to bypass orphan rules. Consider deriving `Deref` and `DerefMut` to improve ergonomics.
`ClearColor` no longer implements `Component`. Using `ClearColor` as a component in 0.8 did nothing.
Use the `ClearColorConfig` in the `Camera3d` and `Camera2d` components instead.
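A short sketch of both migration steps (the type names here are made up):
```rust
use bevy::prelude::*;

// Your own type: just add the derive.
#[derive(Resource)]
struct ActiveProfile {
    name: String,
}

// Pretend this comes from another crate, so `Resource` cannot be implemented on it directly.
struct LibraryClient;

// Wrap it in a tuple struct; `Deref`/`DerefMut` keep access ergonomic.
#[derive(Resource, Deref, DerefMut)]
struct LibraryClientResource(LibraryClient);

fn main() {
    App::new()
        .insert_resource(ActiveProfile { name: "default".into() })
        .insert_resource(LibraryClientResource(LibraryClient))
        .run();
}
```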
Co-authored-by: Alice <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: devil-ira <justthecooldude@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-08-08 21:36:35 +00:00
|
|
|
#[derive(Resource)]
|
2022-03-29 18:31:13 +00:00
|
|
|
pub struct SkinnedMeshUniform {
|
Migrate to encase from crevice (#4339)
2022-05-18 21:09:21 +00:00
|
|
|
pub buffer: BufferVec<Mat4>,
|
2022-03-29 18:31:13 +00:00
|
|
|
}
|
|
|
|
|
2022-05-20 22:05:32 +00:00
|
|
|
impl Default for SkinnedMeshUniform {
|
|
|
|
fn default() -> Self {
|
|
|
|
Self {
|
|
|
|
buffer: BufferVec::new(BufferUsages::UNIFORM),
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-03-29 18:31:13 +00:00
|
|
|
pub fn prepare_skinned_meshes(
|
|
|
|
render_device: Res<RenderDevice>,
|
|
|
|
render_queue: Res<RenderQueue>,
|
|
|
|
mut skinned_mesh_uniform: ResMut<SkinnedMeshUniform>,
|
|
|
|
) {
|
2022-12-20 16:17:05 +00:00
|
|
|
if skinned_mesh_uniform.buffer.is_empty() {
|
2022-03-29 18:31:13 +00:00
|
|
|
return;
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to bind group / do draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
}
|
2022-03-29 18:31:13 +00:00
|
|
|
|
2022-12-20 16:17:05 +00:00
|
|
|
let len = skinned_mesh_uniform.buffer.len();
|
|
|
|
skinned_mesh_uniform.buffer.reserve(len, &render_device);
|
2022-03-29 18:31:13 +00:00
|
|
|
skinned_mesh_uniform
|
|
|
|
.buffer
|
|
|
|
.write_buffer(&render_device, &render_queue);
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
}
|
|
|
|
|
2021-11-22 23:16:36 +00:00
|
|
|
#[derive(Component)]
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
pub struct MeshViewBindGroup {
|
|
|
|
pub value: BindGroup,
|
|
|
|
}
|
|
|
|
|
Clustered forward rendering (#3153)
# Objective
Implement clustered-forward rendering.
## Solution
~~FIXME - in the interest of keeping the merge train moving, I'm submitting this PR now before the description is ready. I want to add in some comments into the code with references for the various bits and pieces and I want to describe some of the key decisions I made here. I'll do that as soon as I can.~~ Anyone reviewing is welcome to add review comments where you want to know more about how something or other works.
* The summary of the technique is that the view frustum is divided into a grid of sub-volumes called clusters; point lights are tested against each of the clusters to see if they would affect that volume within the scene and, if so, added to a list of lights affecting that cluster. Then, when shading a fragment which is a point on the surface of a mesh within the scene, the point is mapped to a cluster and only the lights affecting that cluster are used in lighting calculations. This brings huge performance and scalability benefits as most of the time lights are placed so that there are not that many that overlap each other in terms of their sphere of influence, but there may be many distinct point lights visible in the scene. Doing all the lighting calculations for all visible lights in the scene for every pixel on the screen quickly becomes a performance limitation. Clustered forward rendering allows us to make an approximate list of lights that affect each pixel, indeed each surface in the scene (as it works along the view z axis too, unlike tiled/forward+).
* WebGL2 is a platform we want to support and it does not support storage buffers. Uniform buffer bindings are limited to a maximum of 16384 bytes per binding. I used bit shifting and masking to pack the cluster light lists and various indices into a uniform buffer (see the small packing sketch after this list) and the 16kB limit is very likely the first bottleneck in scaling the number of lights in a scene at the moment if the lights can affect many clusters due to their range or proximity to the camera (there are a lot of clusters close to the camera, which is an area for improvement). We could store the information in textures instead of uniform buffers to remove this bottleneck though I don’t know if there are performance implications to reading from textures instead of uniform buffers.
* Because of the uniform buffer binding size limitations we can support a maximum of 256 lights with the current size of the PointLight struct
* The z-slicing method (i.e. the mapping from view space z to a depth slice which defines the near and far planes of a cluster) is using the Doom 2016 method. I need to add comments with references to this. It’s an exponential function that simplifies well for the purposes of optimising the fragment shader. xy grid divisions are regular in screen space.
* Some optimisation work was done on the allocation of lights to clusters, which involves intersection tests, and for this number of clusters and lights the system has insignificant cost using a fairly naïve algorithm. I think for more lights / finer-grained clusters we could use a BVH, but at some point it would be just much better to use compute shaders and storage buffers.
* Something else to note is that it is absolutely infeasible to use plain cube map point light shadow mapping for many lights. It does not scale in terms of performance nor memory usage. There are some interesting methods I saw discussed in reference material that I will add a link to which render and update shadow maps piece-wise, but they also need compute shaders to work well. Basically for now you need to sacrifice point light shadows for all but a handful of point lights if you don’t want to kill performance. I set the limit to 10 but that’s just what we had from before where 10 was the maximum number of point lights before this PR.
* I added a couple of debug visualisations behind a shader def that were useful for seeing performance impact of light distribution - I should make the debug mode configurable without modifying the shader code. One mode shows the number of lights affecting each cluster by tinting toward red for few lights or green for many lights (maxes out at 16, but not sure that’s a reasonable max). The other shows which cluster the surface at a fragment belongs to by tinting it with a randomish colour. This can help to understand deeper performance issues due to screen space tiles spanning multiple clusters in depth with divergent shader execution times.
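To illustrate the packing idea mentioned above (a sketch only, not the actual layout bevy uses), several small indices can share one `u32` of the uniform buffer via shifts and masks:
```rust
// Pack four 8-bit light indices into one u32 of a cluster light list.
fn pack4(indices: [u32; 4]) -> u32 {
    (indices[0] & 0xFF)
        | ((indices[1] & 0xFF) << 8)
        | ((indices[2] & 0xFF) << 16)
        | ((indices[3] & 0xFF) << 24)
}

// Recover the index stored in `slot` (0..4) of a packed word.
fn unpack(word: u32, slot: u32) -> u32 {
    (word >> (slot * 8)) & 0xFF
}
```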
Also, there are more things that could be done as improvements, and I will document those somewhere (I'm not sure where will be the best place... in a todo alongside the code, a GitHub issue, somewhere else?) but I think it works well enough and brings significant performance and scalability benefits that it's worth integrating already now and then iterating on.
* Calculate the light’s effective range based on its intensity and physical falloff and either just use this, or take the minimum of the user-supplied range and this. This would avoid unnecessary lighting calculations for clusters that cannot be affected. This would need to take into account HDR tone mapping as in my not-fully-understanding-the-details understanding, the threshold is relative to how bright the scene is.
* Improve the z-slicing to use a larger first slice.
* More gracefully handle the cluster light list uniform buffer binding size limitations by prioritising which lights are included (some heuristic for most significant like closest to the camera, brightest, affecting the most pixels, …)
* Switch to using a texture instead of uniform buffer
* Figure out the / a better story for shadows
I will also probably add an example that demonstrates some of the issues:
* What situations exhaust the space available in the uniform buffers
* Light range too large making lights affect many clusters and so exhausting the space for the lists of lights that affect clusters
* Light range set to be too small producing visible artifacts where clusters the light would physically affect are not affected by the light
* Perhaps some performance issues
* How many lights can be closely packed or affect large portions of the view before performance drops?
2021-12-09 03:08:54 +00:00
|
|
|
#[allow(clippy::too_many_arguments)]
|
Reorder render sets, refactor bevy_sprite to take advantage (#9236)
This is a continuation of this PR: #8062
# Objective
- Reorder render schedule sets to allow data preparation when phase item
order is known to support improved batching
- Part of the batching/instancing etc plan from here:
https://github.com/bevyengine/bevy/issues/89#issuecomment-1379249074
- The original idea came from @inodentry and proved to be a good one.
Thanks!
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the new
ordering
## Solution
- Move `Prepare` and `PrepareFlush` after `PhaseSortFlush`
- Add a `PrepareAssets` set that runs in parallel with other systems and
sets in the render schedule.
- Put prepare_assets systems in the `PrepareAssets` set
- If explicit dependencies are needed on Mesh or Material RenderAssets
then depend on the appropriate system.
- Add `ManageViews` and `ManageViewsFlush` sets between
`ExtractCommands` and Queue
- Move `queue_mesh*_bind_group` to the Prepare stage
- Rename them to `prepare_`
- Put systems that prepare resources (buffers, textures, etc.) into a
`PrepareResources` set inside `Prepare`
- Put the `prepare_..._bind_group` systems into a `PrepareBindGroup` set
after `PrepareResources`
- Move `prepare_lights` to the `ManageViews` set
- `prepare_lights` creates views and this must happen before `Queue`
- This system needs refactoring to stop handling all responsibilities
- Gather lights, sort, and create shadow map views. Store sorted light
entities in a resource
- Remove `BatchedPhaseItem`
- Replace `batch_range` with `batch_size` representing how many items to
skip after rendering the item or to skip the item entirely if
`batch_size` is 0.
- `queue_sprites` has been split into `queue_sprites` for queueing phase
items and `prepare_sprites` for batching after the `PhaseSort`
- `PhaseItem`s are still inserted in `queue_sprites`
- After sorting adjacent compatible sprite phase items are accumulated
into `SpriteBatch` components on the first entity of each batch,
containing a range of vertex indices. The associated `PhaseItem`'s
`batch_size` is updated appropriately.
- `SpriteBatch` items are then drawn skipping over the other items in
the batch based on the value in `batch_size`
- A very similar refactor was performed on `bevy_ui`
---
## Changelog
Changed:
- Reordered and reworked render app schedule sets. The main change is
that data is extracted, queued, sorted, and then prepared when the order
of data is known.
- Refactor `bevy_sprite` and `bevy_ui` to take advantage of the
reordering.
## Migration Guide
- Assets such as materials and meshes should now be created in
`PrepareAssets` e.g. `prepare_assets<Mesh>`
- Queueing entities to `RenderPhase`s continues to be done in `Queue`
e.g. `queue_sprites`
- Preparing resources (textures, buffers, etc.) should now be done in
`PrepareResources`, e.g. `prepare_prepass_textures`,
`prepare_mesh_uniforms`
- Prepare bind groups should now be done in `PrepareBindGroups` e.g.
`prepare_mesh_bind_group`
- Any batching or instancing can now be done in `Prepare` where the
order of the phase items is known e.g. `prepare_sprites`
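As a sketch of where render-app systems land after this change (the system names are hypothetical; the sets are the ones named above):
```rust
use bevy::prelude::*;
use bevy::render::{Render, RenderApp, RenderSet};

fn prepare_my_buffers() { /* write buffers, textures, ... */ }
fn prepare_my_bind_group() { /* build bind groups from the prepared buffers */ }

fn build(app: &mut App) {
    let render_app = app.sub_app_mut(RenderApp);
    render_app.add_systems(
        Render,
        (
            prepare_my_buffers.in_set(RenderSet::PrepareResources),
            prepare_my_bind_group.in_set(RenderSet::PrepareBindGroups),
        ),
    );
}
```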
## Next Steps
- Introduce some generic mechanism to ensure items that can be batched
are grouped in the phase item order, currently you could easily have
`[sprite at z 0, mesh at z 0, sprite at z 0]` preventing batching.
- Investigate improved orderings for building the MeshUniform buffer
- Implementing batching across the rest of bevy
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2023-08-27 14:33:49 +00:00
|
|
|
pub fn prepare_mesh_view_bind_groups(
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
mut commands: Commands,
|
|
|
|
render_device: Res<RenderDevice>,
|
|
|
|
mesh_pipeline: Res<MeshPipeline>,
|
2023-03-02 08:21:21 +00:00
|
|
|
shadow_samplers: Res<ShadowSamplers>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
light_meta: Res<LightMeta>,
|
Clustered forward rendering (#3153)
2021-12-09 03:08:54 +00:00
|
|
|
global_light_meta: Res<GlobalLightMeta>,
|
Add Distance and Atmospheric Fog support (#6412)
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873533-44c029af-13b7-4740-8ea3-af96bd5867c9.png">
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873549-36be7a23-b341-42a2-8a9f-ceea8ac7a2b8.png">
# Objective
- Add support for the “classic” distance fog effect, as well as a more advanced atmospheric fog effect.
## Solution
This PR:
- Introduces a new `FogSettings` component that controls distance fog per-camera.
- Adds support for three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
- Adds support for directional light influence over fog color;
- Extracts fog via `ExtractComponent`, then uses a prepare system that sets up a new dynamic uniform struct (`Fog`), similar to other mesh view types;
- Renders fog in PBR material shader, as a final adjustment to the `output_color`, after PBR is computed (but before tone mapping);
- Adds a new `StandardMaterial` flag to enable fog; (`fog_enabled`)
- Adds convenience methods for easier artistic control when creating non-linear fog types;
- Adds documentation around fog.
---
## Changelog
### Added
- Added support for distance-based fog effects for PBR materials, controllable per-camera via the new `FogSettings` component;
- Added `FogFalloff` enum for selecting between three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
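A small usage sketch (field and variant names as listed in this changelog; the values are arbitrary): distance fog is configured per camera by adding the component next to it.
```rust
use bevy::pbr::{FogFalloff, FogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        FogSettings {
            color: Color::rgba(0.1, 0.2, 0.4, 1.0),
            // Fully transparent at 5 units, fully fogged at 50.
            falloff: FogFalloff::Linear { start: 5.0, end: 50.0 },
            ..default()
        },
    ));
}
```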
2023-01-29 15:28:56 +00:00
|
|
|
fog_meta: Res<FogMeta>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
view_uniforms: Res<ViewUniforms>,
|
2023-01-19 22:11:13 +00:00
|
|
|
views: Query<(
|
|
|
|
Entity,
|
|
|
|
&ViewShadowBindings,
|
|
|
|
&ViewClusterBindings,
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)
# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.
## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (preprocess_depths pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct world position, raytrace in screen
space to estimate occlusion. Rather than doing completely random samples
on a hemisphere, you choose random _slices_ of the hemisphere, and then
can analytically compute the full occlusion of that slice. Also compute
edges based on depth differences here.
* Spatial denoise pass - bilateral blur, using edge detection to not
blur over edges. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to be the minimum of ssao/material occlusion. This then feeds into the
rest of the PBR shader as normal.
---
## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)
## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)
## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)
## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
* https://github.com/gfx-rs/wgpu/pull/3798
* https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because r16float is not a valid
storage texture format
https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats. We can fix
this with a fallback to r32float.
---
## Changelog
- Added ScreenSpaceAmbientOcclusionSettings,
ScreenSpaceAmbientOcclusionQualityLevel, and
ScreenSpaceAmbientOcclusionBundle
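A minimal usage sketch (assuming the bundle is exported from `bevy::pbr`): SSAO is enabled per camera by inserting the new bundle alongside a 3D camera.
```rust
use bevy::pbr::ScreenSpaceAmbientOcclusionBundle;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Per-camera SSAO settings; the technique relies on the depth and
        // normal prepasses described above, and may require MSAA to be
        // disabled (e.g. `Msaa::Off`).
        ScreenSpaceAmbientOcclusionBundle::default(),
    ));
}
```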
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-18 21:05:55 +00:00
|
|
|
Option<&ScreenSpaceAmbientOcclusionTextures>,
|
2023-01-19 22:11:13 +00:00
|
|
|
Option<&ViewPrepassTextures>,
|
EnvironmentMapLight, BRDF Improvements (#7051)
(Before)
![image](https://user-images.githubusercontent.com/47158642/213946111-15ec758f-1f1d-443c-b196-1fdcd4ae49da.png)
(After)
![image](https://user-images.githubusercontent.com/47158642/217051179-67381e73-dd44-461b-a2c7-87b0440ef8de.png)
![image](https://user-images.githubusercontent.com/47158642/212492404-524e4ad3-7837-4ed4-8b20-2abc276aa8e8.png)
# Objective
- Improve lighting; especially reflections.
- Closes https://github.com/bevyengine/bevy/issues/4581.
## Solution
- Implement environment maps, providing better ambient light.
- Add microfacet multibounce approximation for specular highlights from Filament.
- Occlusion is no longer incorrectly applied to direct lighting. It now only applies to diffuse indirect light. Unsure if it's also supposed to apply to specular indirect light - the glTF specification just says "indirect light". In the case of ambient occlusion, for instance, that's usually only calculated as diffuse though. For now, I'm choosing to apply this just to indirect diffuse light, and not specular.
- Modified the PBR example to use an environment map, and have labels.
- Added `FallbackImageCubemap`.
## Implementation
- IBL technique references can be found in environment_map.wgsl.
- It's more accurate to use a LUT for the scale/bias. Filament has a good reference on generating this LUT. For now, I just used an analytic approximation.
- For now, environment maps must first be prefiltered outside of bevy using a 3rd party tool. See the `EnvironmentMap` documentation.
- Eventually, we should have our own prefiltering code, so that we can have dynamically changing environment maps, as well as let users drop in an HDR image and use asset preprocessing to create the needed textures using only bevy.
---
## Changelog
- Added an `EnvironmentMapLight` camera component that adds additional ambient light to a scene.
- StandardMaterials will now appear brighter and more saturated at high roughness, due to internal material changes. This is more physically correct.
- Fixed StandardMaterial occlusion being incorrectly applied to direct lighting.
- Added `FallbackImageCubemap`.
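A small usage sketch (assumes two prefiltered cubemaps are already available as assets, as described above; the file names are placeholders): the component is added to a camera.
```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle::default(),
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
        },
    ));
}
```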
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2023-02-09 16:46:32 +00:00
|
|
|
Option<&EnvironmentMapLight>,
|
2023-02-19 20:38:13 +00:00
|
|
|
&Tonemapping,
|
2023-01-19 22:11:13 +00:00
|
|
|
)>,
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
images: Res<RenderAssets<Image>>,
|
2023-01-19 22:11:13 +00:00
|
|
|
mut fallback_images: FallbackImagesMsaa,
|
|
|
|
mut fallback_depths: FallbackImagesDepth,
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
fallback_cubemap: Res<FallbackImageCubemap>,
|
2023-01-19 22:11:13 +00:00
|
|
|
msaa: Res<Msaa>,
|
2022-09-28 04:20:27 +00:00
|
|
|
globals_buffer: Res<GlobalsBuffer>,
|
2023-02-19 20:38:13 +00:00
|
|
|
tonemapping_luts: Res<TonemappingLuts>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```rust
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```rust
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```rust
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to bind group / do draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
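As an illustration of the early-out this enables, here is a sketch of the pattern a fallible render command can use; the exact trait signature, cache type, and `RenderCommandResult` shape vary across bevy_render versions, so treat the types below as assumptions rather than the actual implementation:
```rust
use bevy::render::render_phase::{RenderCommandResult, TrackedRenderPass};
use bevy::render::render_resource::{CachedRenderPipelineId, PipelineCache};

// Abort the draw cleanly if the pipeline (and therefore its shaders/imports)
// is not ready yet, instead of binding groups and issuing draw calls.
fn try_set_pipeline<'w>(
    pipeline_cache: &'w PipelineCache,
    pipeline_id: CachedRenderPipelineId,
    pass: &mut TrackedRenderPass<'w>,
) -> RenderCommandResult {
    let Some(pipeline) = pipeline_cache.get_render_pipeline(pipeline_id) else {
        return RenderCommandResult::Failure;
    };
    pass.set_render_pipeline(pipeline);
    RenderCommandResult::Success
}
```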
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
) {
|
Add Distance and Atmospheric Fog support (#6412)
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873533-44c029af-13b7-4740-8ea3-af96bd5867c9.png">
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873549-36be7a23-b341-42a2-8a9f-ceea8ac7a2b8.png">
# Objective
- Add support for the “classic” distance fog effect, as well as a more advanced atmospheric fog effect.
## Solution
This PR:
- Introduces a new `FogSettings` component that controls distance fog per-camera.
- Adds support for three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
- Adds support for directional light influence over fog color;
- Extracts fog via `ExtractComponent`, then uses a prepare system that sets up a new dynamic uniform struct (`Fog`), similar to other mesh view types;
- Renders fog in PBR material shader, as a final adjustment to the `output_color`, after PBR is computed (but before tone mapping);
- Adds a new `StandardMaterial` flag to enable fog; (`fog_enabled`)
- Adds convenience methods for easier artistic control when creating non-linear fog types;
- Adds documentation around fog.
---
## Changelog
### Added
- Added support for distance-based fog effects for PBR materials, controllable per-camera via the new `FogSettings` component;
- Added `FogFalloff` enum for selecting between three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
2023-01-29 15:28:56 +00:00
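A minimal usage sketch for the `FogSettings` component described above, assuming it is attached to a camera entity (the colour and density values are arbitrary placeholders):
```rust
use bevy::pbr::{FogFalloff, FogSettings};
use bevy::prelude::*;

// Hypothetical setup system: enable exponential distance fog for this camera.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        FogSettings {
            color: Color::rgb(0.1, 0.2, 0.4),
            falloff: FogFalloff::Exponential { density: 0.03 },
            ..default()
        },
    ));
}
```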
|
|
|
if let (
|
|
|
|
Some(view_binding),
|
|
|
|
Some(light_binding),
|
|
|
|
Some(point_light_binding),
|
|
|
|
Some(globals),
|
|
|
|
Some(fog_binding),
|
|
|
|
) = (
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
view_uniforms.uniforms.binding(),
|
|
|
|
light_meta.view_gpu_lights.binding(),
|
Clustered forward rendering (#3153)
# Objective
Implement clustered-forward rendering.
## Solution
~~FIXME - in the interest of keeping the merge train moving, I'm submitting this PR now before the description is ready. I want to add in some comments into the code with references for the various bits and pieces and I want to describe some of the key decisions I made here. I'll do that as soon as I can.~~ Anyone reviewing is welcome to add review comments where you want to know more about how something or other works.
* The summary of the technique is that the view frustum is divided into a grid of sub-volumes called clusters, point lights are tested against each of the clusters to see if they would affect that volume within the scene and, if so, added to a list of lights affecting that cluster. Then when shading a fragment which is a point on the surface of a mesh within the scene, the point is mapped to a cluster and only the lights affecting that cluster are used in lighting calculations. This brings huge performance and scalability benefits as most of the time lights are placed so that there are not that many that overlap each other in terms of their sphere of influence, but there may be many distinct point lights visible in the scene. Doing all the lighting calculations for all visible lights in the scene for every pixel on the screen quickly becomes a performance limitation. Clustered forward rendering allows us to make an approximate list of lights that affect each pixel, indeed each surface in the scene (as it works along the view z axis too, unlike tiled/forward+).
* WebGL2 is a platform we want to support and it does not support storage buffers. Uniform buffer bindings are limited to a maximum of 16384 bytes per binding. I used bit shifting and masking to pack the cluster light lists and various indices into a uniform buffer (see the packing sketch after this list), and the 16kB limit is very likely the first bottleneck in scaling the number of lights in a scene at the moment if the lights can affect many clusters due to their range or proximity to the camera (there are a lot of clusters close to the camera, which is an area for improvement). We could store the information in textures instead of uniform buffers to remove this bottleneck, though I don’t know if there are performance implications to reading from textures instead of uniform buffers.
* Because of the uniform buffer binding size limitations we can support a maximum of 256 lights with the current size of the PointLight struct
* The z-slicing method (i.e. the mapping from view space z to a depth slice which defines the near and far planes of a cluster) is using the Doom 2016 method. I need to add comments with references to this. It’s an exponential function that simplifies well for the purposes of optimising the fragment shader. xy grid divisions are regular in screen space.
* Some optimisation work was done on the allocation of lights to clusters, which involves intersection tests, and for this number of clusters and lights the system has insignificant cost using a fairly naïve algorithm. I think for more lights / finer-grained clusters we could use a BVH, but at some point it would be just much better to use compute shaders and storage buffers.
* Something else to note is that it is absolutely infeasible to use plain cube map point light shadow mapping for many lights. It does not scale in terms of performance nor memory usage. There are some interesting methods I saw discussed in reference material that I will add a link to which render and update shadow maps piece-wise, but they also need compute shaders to work well. Basically for now you need to sacrifice point light shadows for all but a handful of point lights if you don’t want to kill performance. I set the limit to 10 but that’s just what we had from before where 10 was the maximum number of point lights before this PR.
* I added a couple of debug visualisations behind a shader def that were useful for seeing performance impact of light distribution - I should make the debug mode configurable without modifying the shader code. One mode shows the number of lights affecting each cluster by tinting toward red for few lights or green for many lights (maxes out at 16, but not sure that’s a reasonable max). The other shows which cluster the surface at a fragment belongs to by tinting it with a randomish colour. This can help to understand deeper performance issues due to screen space tiles spanning multiple clusters in depth with divergent shader execution times.
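Two of the points above can be made concrete with a small sketch: packing several 8-bit cluster light indices into one `u32` so the lists fit in a uniform buffer, and mapping a view-space depth to an exponential z-slice. Both functions below are illustrative approximations, not the exact layout or constants used in bevy_pbr:
```rust
// Pack four 8-bit light indices into a single u32 word (uniform-buffer friendly).
fn pack_indices(a: u8, b: u8, c: u8, d: u8) -> u32 {
    (a as u32) | ((b as u32) << 8) | ((c as u32) << 16) | ((d as u32) << 24)
}

// Read index `slot` (0..=3) back out of a packed word.
fn unpack_index(word: u32, slot: u32) -> u32 {
    (word >> (8 * slot)) & 0xFF
}

// Exponential z-slicing: each slice spans a constant ratio of view-space depth,
// so slices near the camera are thin and far slices are thick.
fn view_z_to_slice(view_z: f32, near: f32, far: f32, z_slices: u32) -> u32 {
    let slice = (view_z / near).ln() / (far / near).ln() * z_slices as f32;
    (slice.max(0.0) as u32).min(z_slices - 1)
}
```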
Also, there are more things that could be done as improvements, and I will document those somewhere (I'm not sure where will be the best place... in a todo alongside the code, a GitHub issue, somewhere else?) but I think it works well enough and brings significant performance and scalability benefits that it's worth integrating already now and then iterating on.
* Calculate the light’s effective range based on its intensity and physical falloff, and either use that directly or take the minimum of it and the user-supplied range. This would avoid unnecessary lighting calculations for clusters that cannot be affected. It would need to take HDR tone mapping into account since, as far as I understand, the brightness threshold is relative to how bright the scene is.
* Improve the z-slicing to use a larger first slice.
* More gracefully handle the cluster light list uniform buffer binding size limitations by prioritising which lights are included (some heuristic for most significant like closest to the camera, brightest, affecting the most pixels, …)
* Switch to using a texture instead of uniform buffer
* Figure out the / a better story for shadows
I will also probably add an example that demonstrates some of the issues:
* What situations exhaust the space available in the uniform buffers
* Light range too large making lights affect many clusters and so exhausting the space for the lists of lights that affect clusters
* Light range set to be too small producing visible artifacts where clusters the light would physically affect are not affected by the light
* Perhaps some performance issues
* How many lights can be closely packed or affect large portions of the view before performance drops?
2021-12-09 03:08:54 +00:00
|
|
|
global_light_meta.gpu_point_lights.binding(),
|
2022-09-28 04:20:27 +00:00
|
|
|
globals_buffer.buffer.binding(),
|
Add Distance and Atmospheric Fog support (#6412)
2023-01-29 15:28:56 +00:00
|
|
|
fog_meta.gpu_fogs.binding(),
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
) {
|
EnvironmentMapLight, BRDF Improvements (#7051)
(Before)
![image](https://user-images.githubusercontent.com/47158642/213946111-15ec758f-1f1d-443c-b196-1fdcd4ae49da.png)
(After)
![image](https://user-images.githubusercontent.com/47158642/217051179-67381e73-dd44-461b-a2c7-87b0440ef8de.png)
![image](https://user-images.githubusercontent.com/47158642/212492404-524e4ad3-7837-4ed4-8b20-2abc276aa8e8.png)
# Objective
- Improve lighting; especially reflections.
- Closes https://github.com/bevyengine/bevy/issues/4581.
## Solution
- Implement environment maps, providing better ambient light.
- Add microfacet multibounce approximation for specular highlights from Filament.
- Occlusion is no longer incorrectly applied to direct lighting. It now only applies to diffuse indirect light. Unsure if it's also supposed to apply to specular indirect light - the glTF specification just says "indirect light". In the case of ambient occlusion, for instance, that's usually only calculated as diffuse though. For now, I'm choosing to apply this just to indirect diffuse light, and not specular.
- Modified the PBR example to use an environment map, and have labels.
- Added `FallbackImageCubemap`.
## Implementation
- IBL technique references can be found in environment_map.wgsl.
- It's more accurate to use a LUT for the scale/bias. Filament has a good reference on generating this LUT. For now, I just used an analytic approximation.
- For now, environment maps must first be prefiltered outside of bevy using a 3rd party tool. See the `EnvironmentMap` documentation.
- Eventually, we should have our own prefiltering code, so that we can have dynamically changing environment maps, as well as let users drop in an HDR image and use asset preprocessing to create the needed textures using only bevy.
---
## Changelog
- Added an `EnvironmentMapLight` camera component that adds additional ambient light to a scene.
- StandardMaterials will now appear brighter and more saturated at high roughness, due to internal material changes. This is more physically correct.
- Fixed StandardMaterial occlusion being incorrectly applied to direct lighting.
- Added `FallbackImageCubemap`.
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2023-02-09 16:46:32 +00:00
|
|
|
for (
|
|
|
|
entity,
|
|
|
|
view_shadow_bindings,
|
|
|
|
view_cluster_bindings,
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)
# Objective
- Add Screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.
## Solution
- Implement the GTAO algorithm.
-
https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
-
https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's
XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.
## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (preprocess_depths pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct world position, raytrace in screen
space to estimate occlusion. Rather than doing completely random samples
on a hemisphere, you choose random _slices_ of the hemisphere, and then
can analytically compute the full occlusion of that slice. Also compute
edges based on depth differences here.
* Spatial denoise pass - bilateral blur, using edge detection to not
blur over edges. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to be the minimum of ssao/material occlusion. This then feeds into the
rest of the PBR shader as normal.
---
## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)
## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)
## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)
## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
* https://github.com/gfx-rs/wgpu/pull/3798
* https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because r16float is not a valid
storage texture format
https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats. We can fix
this with a fallback to r32float.
---
## Changelog
- Added ScreenSpaceAmbientOcclusionSettings,
ScreenSpaceAmbientOcclusionQualityLevel, and
ScreenSpaceAmbientOcclusionBundle
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-18 21:05:55 +00:00
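A minimal usage sketch, assuming the new bundle is attached to a 3D camera (the system name is a placeholder; depending on platform and engine version, MSAA may need to be disabled for SSAO):
```rust
use bevy::pbr::ScreenSpaceAmbientOcclusionBundle;
use bevy::prelude::*;

// Hypothetical setup system: the bundle also inserts the depth and normal
// prepass components that the SSAO passes read from.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        ScreenSpaceAmbientOcclusionBundle::default(),
    ));
}
```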
|
|
|
ssao_textures,
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
prepass_textures,
|
|
|
|
environment_map,
|
2023-02-19 20:38:13 +00:00
|
|
|
tonemapping,
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
) in &views
|
|
|
|
{
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
2023-06-18 21:05:55 +00:00
|
|
|
let fallback_ssao = fallback_images
|
|
|
|
.image_for_samplecount(1)
|
|
|
|
.texture_view
|
|
|
|
.clone();
|
|
|
|
|
2023-01-20 14:25:21 +00:00
|
|
|
let layout = if msaa.samples() > 1 {
|
2023-01-19 22:11:13 +00:00
|
|
|
&mesh_pipeline.view_layout_multisampled
|
|
|
|
} else {
|
|
|
|
&mesh_pipeline.view_layout
|
|
|
|
};
|
|
|
|
|
|
|
|
let mut entries = vec![
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 0,
|
|
|
|
resource: view_binding.clone(),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 1,
|
|
|
|
resource: light_binding.clone(),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 2,
|
|
|
|
resource: BindingResource::TextureView(
|
|
|
|
&view_shadow_bindings.point_light_depth_texture_view,
|
|
|
|
),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 3,
|
2023-03-02 08:21:21 +00:00
|
|
|
resource: BindingResource::Sampler(&shadow_samplers.point_light_sampler),
|
2023-01-19 22:11:13 +00:00
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 4,
|
|
|
|
resource: BindingResource::TextureView(
|
|
|
|
&view_shadow_bindings.directional_light_depth_texture_view,
|
|
|
|
),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 5,
|
2023-03-02 08:21:21 +00:00
|
|
|
resource: BindingResource::Sampler(&shadow_samplers.directional_light_sampler),
|
2023-01-19 22:11:13 +00:00
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 6,
|
|
|
|
resource: point_light_binding.clone(),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 7,
|
|
|
|
resource: view_cluster_bindings.light_index_lists_binding().unwrap(),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 8,
|
|
|
|
resource: view_cluster_bindings.offsets_and_counts_binding().unwrap(),
|
|
|
|
},
|
|
|
|
BindGroupEntry {
|
|
|
|
binding: 9,
|
|
|
|
resource: globals.clone(),
|
|
|
|
},
|
Add Distance and Atmospheric Fog support (#6412)
2023-01-29 15:28:56 +00:00
|
|
|
BindGroupEntry {
|
|
|
|
binding: 10,
|
|
|
|
resource: fog_binding.clone(),
|
|
|
|
},
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
2023-06-18 21:05:55 +00:00
|
|
|
BindGroupEntry {
|
|
|
|
binding: 11,
|
|
|
|
resource: BindingResource::TextureView(
|
|
|
|
ssao_textures
|
|
|
|
.map(|t| &t.screen_space_ambient_occlusion_texture.default_view)
|
|
|
|
.unwrap_or(&fallback_ssao),
|
|
|
|
),
|
|
|
|
},
|
2023-01-19 22:11:13 +00:00
|
|
|
];
|
|
|
|
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
let env_map = environment_map::get_bindings(
|
|
|
|
environment_map,
|
|
|
|
&images,
|
|
|
|
&fallback_cubemap,
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
2023-06-18 21:05:55 +00:00
|
|
|
[12, 13, 14],
|
EnvironmentMapLight, BRDF Improvements (#7051)
2023-02-09 16:46:32 +00:00
|
|
|
);
|
|
|
|
entries.extend_from_slice(&env_map);
|
|
|
|
|
2023-02-19 20:38:13 +00:00
|
|
|
let tonemapping_luts =
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
2023-06-18 21:05:55 +00:00
|
|
|
get_lut_bindings(&images, &tonemapping_luts, tonemapping, [15, 16]);
|
2023-02-19 20:38:13 +00:00
|
|
|
entries.extend_from_slice(&tonemapping_luts);
|
|
|
|
|
2023-03-02 02:23:06 +00:00
|
|
|
// When using WebGL, we can't have a depth texture with multisampling
|
Webgpu support (#8336)
# Objective
- Support WebGPU
- alternative to #5027 that doesn't need any async / await
- fixes #8315
- Surprise fix #7318
## Solution
### For async renderer initialisation
- Update the plugin lifecycle:
- app builds the plugin
- calls `plugin.build`
- registers the plugin
- app starts the event loop
- event loop waits for `ready` of all registered plugins in the same
order
- returns `true` by default
- then call all `finish` then all `cleanup` in the same order as
registered
- then execute the schedule
In the case of the renderer, to avoid anything async:
- building the renderer plugin creates a detached task that will send
back the initialised renderer through a mutex in a resource
- `ready` will wait for the renderer to be present in the resource
- `finish` will take that renderer and place it in the expected
resources by other plugins
- other plugins (that expect the renderer to be available) `finish` are
called and they are able to set up their pipelines
- `cleanup` is called; the only custom implementation left is the one for pipeline rendering
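A sketch of that build/ready/finish handshake, with a placeholder `FutureRenderer` resource standing in for the real renderer types (the actual types and task spawning in bevy_render differ):
```rust
use bevy::prelude::*;
use std::sync::Mutex;

struct Renderer; // placeholder for the initialised device/queue/adapter

#[derive(Resource, Default)]
struct FutureRenderer(Mutex<Option<Renderer>>);

struct MyRenderPlugin;

impl Plugin for MyRenderPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<FutureRenderer>();
        // A detached task would initialise the renderer and store it in
        // FutureRenderer here.
    }

    fn ready(&self, app: &App) -> bool {
        // Polled by the event loop until the renderer has arrived.
        app.world
            .resource::<FutureRenderer>()
            .0
            .lock()
            .unwrap()
            .is_some()
    }

    fn finish(&self, app: &mut App) {
        // Move the renderer into the resources other plugins expect,
        // so their own `finish` can set up pipelines against it.
        let _renderer = app
            .world
            .resource::<FutureRenderer>()
            .0
            .lock()
            .unwrap()
            .take();
    }
}
```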
### For WebGPU support
- update the `build-wasm-example` script to support passing `--api
webgpu` that will build the example with WebGPU support
- the webgl2 feature was always enabled when building for wasm. It's now
in the default feature list and enabled on all platforms, so checks for
this feature must also check that the target_arch is `wasm32`
---
## Migration Guide
- `Plugin::setup` has been renamed `Plugin::cleanup`
- `Plugin::finish` has been added, and plugins adding pipelines should
do it in this function instead of `Plugin::build`
```rust
// Before
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>()
.init_resource::<OtherRenderResource>();
}
}
// After
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.insert_resource::<MyResource>
.add_systems(Update, my_system);
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<OtherRenderResource>();
}
fn finish(&self, app: &mut App) {
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<RenderResourceNeedingDevice>();
}
}
```
2023-05-04 22:07:57 +00:00
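To make the lifecycle above concrete, here is a minimal sketch of a plugin that defers part of its setup until an asynchronously produced value is available. `MyRenderer`, `FutureRenderer`, and `ActiveRenderer` are made-up names for illustration; only the `build`/`ready`/`finish` hooks come from this PR.
```rust
// Sketch only: gating app startup on an async-initialised value, following
// the build -> ready -> finish lifecycle described above.
use bevy::prelude::*;
use std::sync::Mutex;

struct MyRenderer; // stand-in for a renderer produced by a detached task

#[derive(Resource, Default)]
struct FutureRenderer(Mutex<Option<MyRenderer>>); // filled in by that task

#[derive(Resource)]
struct ActiveRenderer(MyRenderer); // what other plugins expect after `finish`

struct MyAsyncInitPlugin;

impl Plugin for MyAsyncInitPlugin {
    fn build(&self, app: &mut App) {
        // A detached task would eventually place MyRenderer into this resource.
        app.init_resource::<FutureRenderer>();
    }

    fn ready(&self, app: &App) -> bool {
        // The event loop polls this until it returns true.
        app.world
            .resource::<FutureRenderer>()
            .0
            .lock()
            .unwrap()
            .is_some()
    }

    fn finish(&self, app: &mut App) {
        // Move the initialised value into the resource other plugins expect.
        let renderer = app
            .world
            .resource::<FutureRenderer>()
            .0
            .lock()
            .unwrap()
            .take()
            .unwrap();
        app.insert_resource(ActiveRenderer(renderer));
    }
}
```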
|
|
|
if cfg!(any(not(feature = "webgl"), not(target_arch = "wasm32")))
|
|
|
|
|| (cfg!(all(feature = "webgl", target_arch = "wasm32")) && msaa.samples() == 1)
|
|
|
|
{
|
2023-03-02 02:23:06 +00:00
|
|
|
entries.extend_from_slice(&prepass::get_bindings(
|
|
|
|
prepass_textures,
|
|
|
|
&mut fallback_images,
|
|
|
|
&mut fallback_depths,
|
|
|
|
&msaa,
|
Screen Space Ambient Occlusion (SSAO) MVP (#7402)
2023-06-18 21:05:55 +00:00
|
|
|
[17, 18, 19],
|
2023-03-02 02:23:06 +00:00
|
|
|
));
|
2023-01-19 22:11:13 +00:00
|
|
|
}
|
|
|
|
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
## Shader Imports
This adds "whole file" shader imports. These come in two flavors:
### Asset Path Imports
```wgsl
// /assets/shaders/custom.wgsl
#import "shaders/custom_material.wgsl"
[[stage(fragment)]]
fn fragment() -> [[location(0)]] vec4<f32> {
return get_color();
}
```
```wgsl
// /assets/shaders/custom_material.wgsl
[[block]]
struct CustomMaterial {
color: vec4<f32>;
};
[[group(1), binding(0)]]
var<uniform> material: CustomMaterial;
```
### Custom Path Imports
Enables defining custom import paths. These are intended to be used by crates to export shader functionality:
```wgsl
// bevy_pbr2/src/render/pbr.wgsl
#import bevy_pbr::mesh_view_bind_group
#import bevy_pbr::mesh_bind_group
[[block]]
struct StandardMaterial {
base_color: vec4<f32>;
emissive: vec4<f32>;
perceptual_roughness: f32;
metallic: f32;
reflectance: f32;
flags: u32;
};
/* rest of PBR fragment shader here */
```
```rust
impl Plugin for MeshRenderPlugin {
fn build(&self, app: &mut bevy_app::App) {
let mut shaders = app.world.get_resource_mut::<Assets<Shader>>().unwrap();
shaders.set_untracked(
MESH_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_bind_group"),
);
shaders.set_untracked(
MESH_VIEW_BIND_GROUP_HANDLE,
Shader::from_wgsl(include_str!("mesh_view_bind_group.wgsl"))
.with_import_path("bevy_pbr::mesh_view_bind_group"),
);
```
By convention these should use rust-style module paths that start with the crate name. Ultimately we might enforce this convention.
Note that this feature implements _run time_ import resolution. Ultimately we should move the import logic into an asset preprocessor once Bevy gets support for that.
## Decouple Mesh Logic from PBR Logic via MeshRenderPlugin
This breaks out mesh rendering code from PBR material code, which improves the legibility of the code, decouples mesh logic from PBR logic, and opens the door for a future `MaterialPlugin<T: Material>` that handles all of the pipeline setup for arbitrary shader materials.
## Removed `RenderAsset<Shader>` in favor of extracting shaders into RenderPipelineCache
This simplifies the shader import implementation and removes the need to pass around `RenderAssets<Shader>`.
## RenderCommands are now fallible
This allows us to cleanly handle pipelines+shaders not being ready yet. We can abort a render command early in these cases, preventing bevy from trying to set bind groups or issue draw calls for pipelines that couldn't be bound. This could also be used in the future for things like "components not existing on entities yet".
# Next Steps
* Investigate using Naga for "partial typed imports" (ex: `#import bevy_pbr::material::StandardMaterial`, which would import only the StandardMaterial struct)
* Implement `MaterialPlugin<T: Material>` for low-boilerplate custom material shaders
* Move shader import logic into the asset preprocessor once bevy gets support for that.
Fixes #3132
2021-11-18 03:45:02 +00:00
|
|
|
let view_bind_group = render_device.create_bind_group(&BindGroupDescriptor {
|
2023-01-19 22:11:13 +00:00
|
|
|
entries: &entries,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
label: Some("mesh_view_bind_group"),
|
2023-01-19 22:11:13 +00:00
|
|
|
layout,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
});
|
|
|
|
|
|
|
|
commands.entity(entity).insert(MeshViewBindGroup {
|
|
|
|
value: view_bind_group,
|
|
|
|
});
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
pub struct SetMeshViewBindGroup<const I: usize>;
|
2023-01-04 01:13:30 +00:00
|
|
|
impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMeshViewBindGroup<I> {
|
|
|
|
type Param = ();
|
|
|
|
type ViewWorldQuery = (
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
Read<ViewUniformOffset>,
|
2021-11-19 21:16:58 +00:00
|
|
|
Read<ViewLightsUniformOffset>,
|
Add Distance and Atmospheric Fog support (#6412)
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873533-44c029af-13b7-4740-8ea3-af96bd5867c9.png">
<img width="1392" alt="image" src="https://user-images.githubusercontent.com/418473/203873549-36be7a23-b341-42a2-8a9f-ceea8ac7a2b8.png">
# Objective
- Add support for the “classic” distance fog effect, as well as a more advanced atmospheric fog effect.
## Solution
This PR:
- Introduces a new `FogSettings` component that controls distance fog per-camera.
- Adds support for three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
- Adds support for directional light influence over fog color;
- Extracts fog via `ExtractComponent`, then uses a prepare system that sets up a new dynamic uniform struct (`Fog`), similar to other mesh view types;
- Renders fog in PBR material shader, as a final adjustment to the `output_color`, after PBR is computed (but before tone mapping);
- Adds a new `StandardMaterial` flag to enable fog; (`fog_enabled`)
- Adds convenience methods for easier artistic control when creating non-linear fog types;
- Adds documentation around fog.
---
## Changelog
### Added
- Added support for distance-based fog effects for PBR materials, controllable per-camera via the new `FogSettings` component;
- Added `FogFalloff` enum for selecting between three widely used “traditional” fog falloff modes: `Linear`, `Exponential` and `ExponentialSquared`, as well as a more advanced `Atmospheric` fog;
2023-01-29 15:28:56 +00:00
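For reference, enabling fog on a camera with these components might look like the following sketch (the field values are arbitrary; the component and enum names are the ones listed above):
```rust
// Sketch only: attaching distance fog to a camera via FogSettings.
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        FogSettings {
            color: Color::rgba(0.25, 0.25, 0.35, 1.0),
            // No fog up to 5 units from the camera, fully fogged at 20 units.
            falloff: FogFalloff::Linear {
                start: 5.0,
                end: 20.0,
            },
            ..default()
        },
    ));
}
```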
|
|
|
Read<ViewFogUniformOffset>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
Read<MeshViewBindGroup>,
|
2023-01-04 01:13:30 +00:00
|
|
|
);
|
|
|
|
type ItemWorldQuery = ();
|
|
|
|
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
#[inline]
|
|
|
|
fn render<'w>(
|
2023-01-04 01:13:30 +00:00
|
|
|
_item: &P,
|
Add Distance and Atmospheric Fog support (#6412)
2023-01-29 15:28:56 +00:00
|
|
|
(view_uniform, view_lights, view_fog, mesh_view_bind_group): ROQueryItem<
|
|
|
|
'w,
|
|
|
|
Self::ViewWorldQuery,
|
|
|
|
>,
|
2023-01-04 01:13:30 +00:00
|
|
|
_entity: (),
|
|
|
|
_: SystemParamItem<'w, '_, Self::Param>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
pass: &mut TrackedRenderPass<'w>,
|
|
|
|
) -> RenderCommandResult {
|
|
|
|
pass.set_bind_group(
|
|
|
|
I,
|
|
|
|
&mesh_view_bind_group.value,
|
Add Distance and Atmospheric Fog support (#6412)
2023-01-29 15:28:56 +00:00
|
|
|
&[view_uniform.offset, view_lights.offset, view_fog.offset],
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
);
|
|
|
|
|
|
|
|
RenderCommandResult::Success
|
|
|
|
}
|
|
|
|
}
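For context on how a command like `SetMeshViewBindGroup` is consumed, and on the fallible `RenderCommandResult` mentioned in the shader-imports PR above, here is a hedged sketch of a command that bails out when its bind group is missing, plus a composed draw tuple. `MyBindGroup`, `SetMyBindGroup`, and `DrawMyMesh` are illustrative names, not Bevy APIs.
```rust
// Sketch only: a fallible render command and a composed draw-function tuple,
// mirroring the pattern of SetMeshViewBindGroup above.
use bevy::ecs::{
    prelude::*,
    query::ROQueryItem,
    system::{lifetimeless::Read, SystemParamItem},
};
use bevy::pbr::{DrawMesh, SetMeshViewBindGroup};
use bevy::render::{
    render_phase::{
        PhaseItem, RenderCommand, RenderCommandResult, SetItemPipeline, TrackedRenderPass,
    },
    render_resource::BindGroup,
};

#[derive(Component)]
struct MyBindGroup {
    value: BindGroup,
}

struct SetMyBindGroup<const I: usize>;

impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMyBindGroup<I> {
    type Param = ();
    type ViewWorldQuery = ();
    type ItemWorldQuery = Option<Read<MyBindGroup>>;

    #[inline]
    fn render<'w>(
        _item: &P,
        _view: ROQueryItem<'w, Self::ViewWorldQuery>,
        bind_group: ROQueryItem<'w, Self::ItemWorldQuery>,
        _param: SystemParamItem<'w, '_, Self::Param>,
        pass: &mut TrackedRenderPass<'w>,
    ) -> RenderCommandResult {
        // Abort early (fallible render command) if the bind group isn't ready.
        let Some(bind_group) = bind_group else {
            return RenderCommandResult::Failure;
        };
        pass.set_bind_group(I, &bind_group.value, &[]);
        RenderCommandResult::Success
    }
}

// Render commands compose into a tuple that acts as the draw function.
type DrawMyMesh = (
    SetItemPipeline,
    SetMeshViewBindGroup<0>,
    SetMyBindGroup<1>,
    DrawMesh,
);
```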
|
|
|
|
|
|
|
|
pub struct SetMeshBindGroup<const I: usize>;
|
2023-01-04 01:13:30 +00:00
|
|
|
impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMeshBindGroup<I> {
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine-grained
control. This is typically used for facial expressions. By specifying
multiple poses as vertex offsets, and providing a set of weights, one per
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allows composition.
Morph targets are part of the [gltf standard][2] and are a feature of
Unity, Unreal, and Babylon.js, so it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target, because the number of
attribute×components×animated vertices is expected to always exceed the
maximum pixel row size limit of WebGL2. It copies fairly closely the way
skinning is implemented on the CPU side, while on the GPU side, the
shader morph target implementation is a relatively trivial detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The `MorphWeights` component, user-accessible, controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh may
have different weights); all the weights are uploaded to a uniform
buffer of 256 `f32`. We limit to 16 poses per mesh, and a total of 256
poses.
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of less and more attributes (eg: animated uv, animated
arbitrary attributes)
- Dynamic pose allocation (so that zero-weighted poses aren't uploaded
to GPU for example, enables much more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and tangent
attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`; this allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing around buffers). See the documentation of those traits for details.
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending between
various poses defined as morph targets.
- `MorphWeights` are directly inherited by direct children (single level
of hierarchy) of an entity. This allows controlling several mesh
primitives through a unique entity _as per GLTF spec_.
- Add `MorphTargetNames` component, naming each index of the loaded morph
targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously, they
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph targets testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate morph
targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also handle properly the new `MORPH_TARGETS` shader def and
mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- The `MeshBindGroup` is now `MeshBindGroups`, cached bind groups are
now accessed through the `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
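As a small usage sketch for the `MorphWeights` component described above (the system, the import path, and the `weights_mut` accessor are assumptions for illustration, not text from this PR):
```rust
// Sketch only: driving the first two morph target weights from a system.
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

fn animate_morphs(time: Res<Time>, mut query: Query<&mut MorphWeights>) {
    // Ping-pong a blend factor between 0 and 1.
    let t = (time.elapsed_seconds().sin() + 1.0) * 0.5;
    for mut morph_weights in &mut query {
        let weights = morph_weights.weights_mut();
        if weights.len() >= 2 {
            weights[0] = t;
            weights[1] = 1.0 - t;
        }
    }
}
```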
|
|
|
type Param = SRes<MeshBindGroups>;
|
2023-01-04 01:13:30 +00:00
|
|
|
type ViewWorldQuery = ();
|
|
|
|
type ItemWorldQuery = (
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
Read<Handle<Mesh>>,
|
Use GpuArrayBuffer for MeshUniform (#9254)
# Objective
- Reduce the number of rebindings to enable batching of draw commands
## Solution
- Use the new `GpuArrayBuffer` for `MeshUniform` data to store all
`MeshUniform` data in arrays within fewer bindings
- Sort opaque/alpha mask prepass, opaque/alpha mask main, and shadow
phases also by the batch per-object data binding dynamic offset to
improve performance on WebGL2.
---
## Changelog
- Changed: Per-object `MeshUniform` data is now managed by
`GpuArrayBuffer` as arrays in buffers that need to be indexed into.
## Migration Guide
Accessing the `model` member of an individual mesh object's shader
`Mesh` struct the old way where each `MeshUniform` was stored at its own
dynamic offset:
```wgsl
struct Vertex {
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh.model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
The new way where one needs to index into the array of `Mesh`es for the
batch:
```wgsl
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh[vertex.instance_index].model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
Note that using the instance_index is the default way to pass the
per-object index into the shader, but if you wish to do custom rendering
approaches you can pass it in however you like.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
2023-07-30 13:17:08 +00:00
|
|
|
Read<GpuArrayBufferIndex<MeshUniform>>,
|
2023-01-04 01:13:30 +00:00
|
|
|
Option<Read<SkinnedMeshJoints>>,
|
Add morph targets (#8158)
2023-06-22 20:00:01 +00:00
|
|
|
Option<Read<MorphIndex>>,
|
Shader Imports. Decouple Mesh logic from PBR (#3137)
2021-11-18 03:45:02 +00:00
|
|
|
);
|
Add morph targets (#8158)
# Objective
- Add morph targets to `bevy_pbr` (closes #5756) & load them from glTF
- Supersedes #3722
- Fixes #6814
[Morph targets][1] (also known as shape interpolation, shape keys, or
blend shapes) allow animating individual vertices with fine grained
controls. This is typically used for facial expressions. By specifying
multiple poses as vertex offset, and providing a set of weight of each
pose, it is possible to define surprisingly realistic transitions
between poses. Blending between multiple poses also allow composition.
Morph targets are part of the [gltf standard][2] and are a feature of
Unity and Unreal, and babylone.js, it is only natural to implement them
in bevy.
## Solution
This implementation of morph targets uses a 3d texture where each pixel
is a component of an animated attribute. Each layer is a different
target. We use a 2d texture for each target because the number of
attributes × components × animated vertices is expected to always
exceed the maximum pixel row size limit of WebGL2. On the CPU side this
copies fairly closely the way skinning is implemented, while on the GPU
side the shader morph target implementation is a relatively trivial
detail.
We add an optional `morph_texture` to the `Mesh` struct. The
`morph_texture` is built through a method that accepts an iterator over
attribute buffers.
The user-accessible `MorphWeights` component controls the blend of
poses used by mesh instances (so that multiple copies of the same mesh
may have different weights). All the weights are uploaded to a uniform
buffer of 256 `f32`. We limit meshes to 16 poses each, and 256 poses in
total.
More literature:
* Old Babylon.js implementation (vertex attribute-based):
https://www.eternalcoding.com/dev-log-1-morph-targets/
* Babylon.js implementation (similar to ours):
https://www.youtube.com/watch?v=LBPRmGgU0PE
* GPU gems 3:
https://developer.nvidia.com/gpugems/gpugems3/part-i-geometry/chapter-3-directx-10-blend-shapes-breaking-limits
* Development discord thread
https://discord.com/channels/691052431525675048/1083325980615114772
https://user-images.githubusercontent.com/26321040/231181046-3bca2ab2-d4d9-472e-8098-639f1871ce2e.mp4
https://github.com/bevyengine/bevy/assets/26321040/d2a0c544-0ef8-45cf-9f99-8c3792f5a258
## Acknowledgements
* Thanks to `storytold` for sponsoring the feature
* Thanks to `superdump` and `james7132` for guidance and help figuring
out stuff
## Future work
- Handling of fewer or additional attributes (e.g. animated UVs or
other arbitrary animated attributes)
- Dynamic pose allocation (so that, for example, zero-weighted poses
aren't uploaded to the GPU, enabling many more total poses)
- Better animation API, see #8357
----
## Changelog
- Add morph targets to bevy meshes
- Support up to 64 poses per mesh, each with up to 116508 vertices;
animation is currently strictly limited to the position, normal, and
tangent attributes.
- Load a morph target using `Mesh::set_morph_targets`
- Add `VisitMorphTargets` and `VisitMorphAttributes` traits to
`bevy_render`. This allows defining morph targets (a fairly complex and
nested data structure) through iterators (i.e. a single copy instead of
passing buffers around); see the documentation of those traits for
details
- Add `MorphWeights` component exported by `bevy_render`
- `MorphWeights` controls a mesh's morph target weights, blending
between the various poses defined as morph targets.
- `MorphWeights` is inherited by direct children (a single level of
hierarchy) of an entity. This allows controlling several mesh
primitives through a single entity, _as per the glTF spec_.
- Add the `MorphTargetNames` component, naming each index of the loaded
morph targets.
- Load morph targets weights and buffers in `bevy_gltf`
- Handle morph target animations in `bevy_animation` (previously this
only produced a `warn!` log)
- Add the `MorphStressTest.gltf` asset for morph target testing, taken
from the glTF samples repo, CC0.
- Add morph target manipulation to `scene_viewer`
- Separate the animation code in `scene_viewer` from the rest of the
code, reducing `#[cfg(feature)]` noise
- Add the `morph_targets.rs` example to show off how to manipulate
morph targets, loading `MorphStressTest.gltf`
## Migration Guide
- (very specialized, unlikely to be touched by 3rd parties)
- `MeshPipeline` now has a single `mesh_layouts` field rather than
separate `mesh_layout` and `skinned_mesh_layout` fields. You should
handle all possible mesh bind group layouts in your implementation
- You should also properly handle the new `MORPH_TARGETS` shader def
and mesh pipeline key. A new function is exposed to make this easier:
`setup_morph_and_skinning_defs`
- `MeshBindGroup` is now `MeshBindGroups`; cached bind groups are now
accessed through its `get` method.
[1]: https://en.wikipedia.org/wiki/Morph_target_animation
[2]:
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#morph-targets
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-06-22 20:00:01 +00:00
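As a usage sketch (not code from this PR): once a mesh with morph targets is loaded, the blend can be driven at runtime by writing to the `MorphWeights` component. The module path and the `weights_mut` accessor below are assumptions about the API described above, so adjust them to your Bevy version:
```rust
// Hedged sketch: oscillate the first morph target weight of every morphed entity.
use bevy::prelude::*;
use bevy::render::mesh::morph::MorphWeights;

fn pulse_first_morph_target(time: Res<Time>, mut weights: Query<&mut MorphWeights>) {
    for mut morph_weights in &mut weights {
        if let Some(first) = morph_weights.weights_mut().first_mut() {
            // Blend the first pose in and out between 0.0 and 1.0.
            *first = 0.5 + 0.5 * time.elapsed_seconds().sin();
        }
    }
}
```
Because the weights are inherited by direct children, attaching the component to the parent of the mesh primitives is enough to drive all of them, as described in the changelog above.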
    #[inline]
    fn render<'w>(
        _item: &P,
        _view: (),
Use GpuArrayBuffer for MeshUniform (#9254)
# Objective
- Reduce the number of rebindings to enable batching of draw commands
## Solution
- Use the new `GpuArrayBuffer` for `MeshUniform` data to store all
`MeshUniform` data in arrays within fewer bindings
- Also sort the opaque/alpha-mask prepass, opaque/alpha-mask main, and
shadow phases by the batch's per-object data binding dynamic offset to
improve performance on WebGL2.
---
## Changelog
- Changed: Per-object `MeshUniform` data is now managed by
`GpuArrayBuffer` as arrays in buffers that need to be indexed into.
## Migration Guide
The old way of accessing the `model` member of an individual mesh
object's shader `Mesh` struct, where each `MeshUniform` was stored at
its own dynamic offset:
```rust
struct Vertex {
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh.model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
The new way, where one needs to index into the array of `Mesh`es for
the batch:
```rust
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
};
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
out.clip_position = mesh_position_local_to_clip(
mesh[vertex.instance_index].model,
vec4<f32>(vertex.position, 1.0)
);
return out;
}
```
Note that using `instance_index` is the default way to pass the
per-object index into the shader, but if you wish to use a custom
rendering approach you can pass it in however you like.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
2023-07-30 13:17:08 +00:00
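On the Rust side of this change, the per-object uniforms end up in one `GpuArrayBuffer<MeshUniform>`, and each inserted element hands back a `GpuArrayBufferIndex<MeshUniform>` that later reaches the shader as the instance index (plus a dynamic offset on platforms that fall back to a dynamic-offset uniform buffer). A rough sketch of that flow; the `new`, `push`, and `write_buffer` method names are assumptions about the `GpuArrayBuffer` API rather than code quoted from this PR:
```rust
// Hedged sketch of collecting per-object MeshUniform data into a single
// GpuArrayBuffer; method names are assumed, check the actual API.
use bevy::pbr::MeshUniform;
use bevy::render::render_resource::{GpuArrayBuffer, GpuArrayBufferIndex};
use bevy::render::renderer::{RenderDevice, RenderQueue};

fn collect_mesh_uniforms(
    render_device: &RenderDevice,
    render_queue: &RenderQueue,
    per_object: Vec<MeshUniform>,
) -> Vec<GpuArrayBufferIndex<MeshUniform>> {
    let mut buffer = GpuArrayBuffer::<MeshUniform>::new(render_device);
    // Every object's uniform goes into one array; the returned index is what
    // later shows up as `instance_index` (or a push constant on WebGL2).
    let indices: Vec<_> = per_object
        .into_iter()
        .map(|uniform| buffer.push(uniform))
        .collect();
    buffer.write_buffer(render_device, render_queue);
    indices
}
```
The render commands below then read that `GpuArrayBufferIndex<MeshUniform>` per entity to set dynamic offsets and instance ranges.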
        (mesh, batch_indices, skin_index, morph_index): ROQueryItem<Self::ItemWorldQuery>,
        bind_groups: SystemParamItem<'w, '_, Self::Param>,
        pass: &mut TrackedRenderPass<'w>,
    ) -> RenderCommandResult {
        let bind_groups = bind_groups.into_inner();
        let is_skinned = skin_index.is_some();
        let is_morphed = morph_index.is_some();

        let Some(bind_group) = bind_groups.get(mesh.id(), is_skinned, is_morphed) else {
            error!(
                "The MeshBindGroups resource wasn't set in the render phase. \
                It should be set by the queue_mesh_bind_group system.\n\
                This is a bevy bug! Please open an issue."
            );
            return RenderCommandResult::Failure;
        };

        let mut dynamic_offsets: [u32; 3] = Default::default();
        let mut index_count = 0;
        if let Some(mesh_index) = batch_indices.dynamic_offset {
            dynamic_offsets[index_count] = mesh_index;
            index_count += 1;
        }
        if let Some(skin_index) = skin_index {
            dynamic_offsets[index_count] = skin_index.index;
            index_count += 1;
        }
        if let Some(morph_index) = morph_index {
            dynamic_offsets[index_count] = morph_index.index;
            index_count += 1;
        }
        pass.set_bind_group(I, bind_group, &dynamic_offsets[0..index_count]);

        RenderCommandResult::Success
    }
}
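The dynamic offsets above are packed densely in a fixed order (the per-object batch offset first, then the skin offset, then the morph offset), and only the offsets that are actually present are passed to `set_bind_group`. A standalone sketch of that packing pattern, using plain Rust with no bevy types:
```rust
// Standalone illustration of the "pack only the offsets that exist" pattern
// used by the render command above.
fn pack_dynamic_offsets(
    mesh: Option<u32>,
    skin: Option<u32>,
    morph: Option<u32>,
) -> ([u32; 3], usize) {
    let mut offsets = [0u32; 3];
    let mut count = 0;
    for offset in [mesh, skin, morph].into_iter().flatten() {
        offsets[count] = offset;
        count += 1;
    }
    (offsets, count)
}

#[test]
fn packs_in_order_and_skips_missing_offsets() {
    // The skin offset is absent, so the morph offset moves up one slot.
    assert_eq!(pack_dynamic_offsets(Some(64), None, Some(256)), ([64, 256, 0], 2));
}
```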

pub struct DrawMesh;
impl<P: PhaseItem> RenderCommand<P> for DrawMesh {
    type Param = SRes<RenderAssets<Mesh>>;
    type ViewWorldQuery = ();
    type ItemWorldQuery = (Read<GpuArrayBufferIndex<MeshUniform>>, Read<Handle<Mesh>>);
    #[inline]
    fn render<'w>(
        _item: &P,
        _view: (),
        (batch_indices, mesh_handle): ROQueryItem<'_, Self::ItemWorldQuery>,
        meshes: SystemParamItem<'w, '_, Self::Param>,
        pass: &mut TrackedRenderPass<'w>,
    ) -> RenderCommandResult {
        if let Some(gpu_mesh) = meshes.into_inner().get(mesh_handle) {
            pass.set_vertex_buffer(0, gpu_mesh.vertex_buffer.slice(..));
            #[cfg(all(feature = "webgl", target_arch = "wasm32"))]
            pass.set_push_constants(
                ShaderStages::VERTEX,
                0,
                &(batch_indices.index as i32).to_le_bytes(),
            );
            match &gpu_mesh.buffer_info {
                GpuBufferInfo::Indexed {
                    buffer,
                    index_format,
                    count,
                } => {
                    pass.set_index_buffer(buffer.slice(..), 0, *index_format);
                    pass.draw_indexed(0..*count, 0, batch_indices.index..batch_indices.index + 1);
                }
                GpuBufferInfo::NonIndexed => {
                    pass.draw(
                        0..gpu_mesh.vertex_count,
                        batch_indices.index..batch_indices.index + 1,
                    );
                }
            }
            RenderCommandResult::Success
        } else {
            RenderCommandResult::Failure
        }
    }
}
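`DrawMesh` is only the last step of a complete draw function; it is normally composed with the bind group commands defined in this module into a render command tuple. A sketch of that composition (the `CustomMaterial` type, the `SetMaterialBindGroup` slot, and the exact group indices are illustrative of the usual pattern rather than taken from this file):
```rust
// Sketch of a typical draw function: bind the pipeline, the view and material
// data, then the per-mesh bind group, and finally issue the draw via DrawMesh.
type DrawCustomMaterial = (
    SetItemPipeline,
    SetMeshViewBindGroup<0>,
    SetMaterialBindGroup<CustomMaterial, 1>,
    SetMeshBindGroup<2>,
    DrawMesh,
);
```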

#[cfg(test)]
mod tests {
    use super::MeshPipelineKey;
    #[test]
    fn mesh_key_msaa_samples() {
        for i in [1, 2, 4, 8, 16, 32, 64, 128] {
            assert_eq!(MeshPipelineKey::from_msaa_samples(i).msaa_samples(), i);
        }
    }
}
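The round trip asserted above works because the sample count is always a power of two, so the key only needs to store its exponent in a few bits. A standalone sketch of one such encoding (the shift and mask values are illustrative, not necessarily bevy's actual key layout):
```rust
// Illustrative encoding: store log2(samples) in the low bits of a key.
const MSAA_SHIFT: u32 = 0;
const MSAA_MASK: u32 = 0b111; // 3 bits cover sample counts up to 128 (2^7).

fn from_msaa_samples(samples: u32) -> u32 {
    (samples.trailing_zeros() & MSAA_MASK) << MSAA_SHIFT
}

fn msaa_samples(key: u32) -> u32 {
    1 << ((key >> MSAA_SHIFT) & MSAA_MASK)
}

fn main() {
    for samples in [1, 2, 4, 8, 16, 32, 64, 128] {
        assert_eq!(msaa_samples(from_msaa_samples(samples)), samples);
    }
}
```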