Commit graph

332 commits

IceSentry
5134272dc9
Make FOG_ENABLED a shader_def instead of material flag (#13783)
# Objective

- If fog is disabled, the shader still generates a useless branch, which can
hurt performance

## Solution

- Make the flag a shader_def instead

## Testing

- I tested that enabling/disabling fog works as expected per-material in the
fog example
- I also tested that scenes that don't add the FogSettings resource
still work correctly

## Review notes

I'm not sure how to handle the removed material flag. For now I just
commented it out and added a note to reuse it instead of creating a new
one.
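
For illustration only (not the exact code in this PR), a minimal sketch of gating the def during pipeline specialization; `apply_fog_shader_def` is a hypothetical helper, while `RenderPipelineDescriptor` and the `FOG_ENABLED` def are the existing names:

```rust
use bevy::render::render_resource::RenderPipelineDescriptor;

// Hypothetical helper: only push the def when fog is actually in use, so a
// material with fog disabled compiles a shader with no fog branch at all.
fn apply_fog_shader_def(descriptor: &mut RenderPipelineDescriptor, fog_enabled: bool) {
    if fog_enabled {
        if let Some(fragment) = descriptor.fragment.as_mut() {
            fragment.shader_defs.push("FOG_ENABLED".into());
        }
    }
}
```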
2024-06-10 13:26:43 +00:00
Gagnus
298b01f10d
Adds back a way to convert a color to a u8 array, implemented for the two RGB color types; also renames Color::linear to Color::to_linear. (#13759)
# Objective

One thing missing from the new Color implementation in 0.14 is the
ability to easily convert to a u8 representation of the rgb color.

(Note: this is a redo of PR https://github.com/bevyengine/bevy/pull/13739,
as I needed to move the source branch.)

## Solution

I have added to_u8_array and to_u8_array_no_alpha to a new trait called
ColorToPacked to mirror the f32 conversions in ColorToComponents and
implemented the new trait for Srgba and LinearRgba.
To go with those I also added matching from_u8... functions and
converted a couple of cases that used ad-hoc implementations of that
conversion to use these.
After a discussion on Discord about the experience of using the API, I renamed
Color::linear to Color::to_linear, since without that it looks like a
constructor (like Color::rgb).
I also added to_srgba, the other color type commonly converted to (for UI
and 2D), to match to_linear.
I also removed a redundant implementation of to_f32_array for LinearColor,
as it is also supplied by ColorToComponents (I'm surprised that's
allowed?)
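
A minimal usage sketch (assuming `ColorToPacked` is re-exported from `bevy::color`; the exact path may differ):

```rust
use bevy::color::{ColorToPacked, Srgba};

fn main() {
    let orange = Srgba::new(1.0, 0.5, 0.0, 1.0);
    let rgba: [u8; 4] = orange.to_u8_array();         // RGBA bytes, 0..=255 per channel
    let rgb: [u8; 3] = orange.to_u8_array_no_alpha(); // RGB bytes only
    let back = Srgba::from_u8_array(rgba);            // round-trips (within u8 precision)
    println!("{rgba:?} {rgb:?} {back:?}");
}
```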

## Testing

Ran all tests and manually tested.
Added to_and_from_u8 to linear_rgba::tests

## Changelog

The visible change is that Color::linear becomes Color::to_linear.

---------

Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
2024-06-10 13:03:46 +00:00
JMS55
c50a4d8821
Remove unused mip_bias parameter from apply_normal_mapping (#13752)
Mip bias is no longer used here
2024-06-10 13:00:34 +00:00
IceSentry
f7ae277025
Use TBN in apply_normal_mapping in pbr_prepass (#13716)
# Objective

- apply_normal_mapping was changed to use TBN but the pbr_prepass was
not updated for that change

## Solution

- Update the pbr_prepass to correctly apply normal mapping
2024-06-06 19:04:30 +00:00
Patrick Walton
ad6872275f
Rename "point light" to "clusterable object" in cluster contexts. (#13654)
We want to use the clustering infrastructure for light probes and decals
as well, not just point lights. This patch builds on top of #13640 and
performs the rename.

To make this series easier to review, this patch makes no code changes.
Only identifiers and comments are modified.

## Migration Guide

* In the PBR shaders, `point_lights` is now known as
`clusterable_objects`, `PointLight` is now known as `ClusterableObject`,
and `cluster_light_index_lists` is now known as
`clusterable_object_index_lists`.
2024-06-04 11:01:13 +00:00
Patrick Walton
df8ccb8735
Implement PBR anisotropy per KHR_materials_anisotropy. (#13450)
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.

[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.

Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
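
As a rough usage sketch (the field names are the ones described above; the other values, and the assumption that the rotation is in radians, are illustrative):

```rust
use bevy::prelude::*;

// Hypothetical brushed-metal material: high strength stretches the specular
// highlight along the (rotated) tangent direction.
fn make_brushed_metal(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        anisotropy_strength: 0.8, // 0.0 = isotropic, 1.0 = fully anisotropic
        anisotropy_rotation: std::f32::consts::FRAC_PI_4, // rotate the direction away from the tangent
        ..default()
    })
}
```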

In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.

The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.

A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.

[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel

[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering

[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md

[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/

## Changelog

### Added

* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.

## Screenshots

With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)

Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
2024-06-03 23:46:06 +00:00
Ricky Taylor
9b9d3d81cb
Normalise matrix naming (#13489)
# Objective
- Fixes #10909
- Fixes #8492

## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
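
One benefit of this convention (implied by the changelog below) is that valid compositions read like adjacent names cancelling; a tiny sketch:

```rust
use bevy::math::Mat4;

// clip_from_view * view_from_world * world_from_local = clip_from_local:
// the inner names "cancel", which makes mismatched multiplications easy to spot.
fn clip_from_local(clip_from_view: Mat4, view_from_world: Mat4, world_from_local: Mat4) -> Mat4 {
    clip_from_view * view_from_world * world_from_local
}
```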

## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.

---

## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming,
making the space conversion more obvious.

## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
2024-06-03 16:56:53 +00:00
Patrick Walton
5c74c17c24
Move clustering-related types and functions into their own module. (#13640)
As a prerequisite for decals and clustering of light probes, we want
clustering to operate on objects other than lights. (Currently, it only
operates on point and spot lights.) This necessitates a large
refactoring, so I'm splitting it up into small steps.

The first such step is to separate clustering from lighting by moving
clustering-related types and functions out of lighting and into their
own module subtree within the `bevy_pbr` crate. (Ultimately, we may want
to move it to `bevy_render`, but that requires more work and can be a
followup.)

No code changes have been made other than adjusting import lists and
moving code. This is to make this code easy to review. Ultimately, I
want to rename "light" to "clusterable object" in most cases, but doing
that at the same time as moving the code would make reviewing harder. So
instead I'm moving the code first and will follow this up with renaming.

## Migration Guide

* Clustering-related types and functions (e.g.
`assign_lights_to_clusters`) have moved under `bevy_pbr::cluster`, in
preparation for the ability to cluster objects other than lights.
2024-06-03 15:05:48 +00:00
Patrick Walton
be053b1d7c
Implement motion vectors and TAA for skinned meshes and meshes with morph targets. (#13572)
This is a revamped equivalent to #9902, though it shares none of the
code. It handles all special cases that I've tested correctly.

The overall technique consists of double-buffering the joint matrix and
morph weights buffers, as most of the previous attempts to solve this
problem did. The process is generally straightforward. Note that, to
avoid regressing the ability of mesh extraction, skin extraction, and
morph target extraction to run in parallel, I had to add a new system to
rendering, `set_mesh_motion_vector_flags`. The comment there explains
the details; it generally runs very quickly.
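
A minimal, Bevy-agnostic sketch of the double-buffering idea (not the actual types used in this PR):

```rust
// Keep this frame's and last frame's values (e.g. joint matrices or morph
// weights) so the shader can derive motion vectors from their difference.
struct DoubleBuffered<T> {
    current: Vec<T>,
    previous: Vec<T>,
}

impl<T> DoubleBuffered<T> {
    // Called once per frame before writing the new values into `current`.
    fn swap(&mut self) {
        std::mem::swap(&mut self.current, &mut self.previous);
    }
}
```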

I've tested this with modified versions of the `animated_fox`,
`morph_targets`, and `many_foxes` examples that add TAA, and the patch
works. To avoid bloating those examples, I didn't add switches for TAA
to them.

Addresses points (1) and (2) of #8423.

## Changelog

### Fixed

* Motion vectors, and therefore TAA, are now supported for meshes with
skins and/or morph targets.
2024-05-31 17:02:28 +00:00
arcashka
cdc605cc48
add tonemapping LUT bindings for sprite and mesh2d pipelines (#13262)
Fixes #13118
If you use `Sprite` or `Mesh2d` and create `Camera` with
* hdr=false
* any tonemapper

You would get
```
wgpu error: Validation Error

Caused by:
    In Device::create_render_pipeline
      note: label = `sprite_pipeline`
    Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline
    Shader global ResourceBinding { group: 0, binding: 19 } is not available in the pipeline layout
    Binding is missing from the pipeline layout
```
This happens because the tonemapping LUT bindings are missing.

## Solution
Add missing bindings for tonemapping LUT's to `SpritePipeline` &
`Mesh2dPipeline`

## Testing
I checked that
* `tonemapping`
* `color_grading`
* `sprite_animations`
* `2d_shapes`
* `meshlet`
* `deferred_rendering`
examples are still working

I checked the 2D cases with this code:
```
use bevy::{
    color::palettes::css::PURPLE, core_pipeline::tonemapping::Tonemapping, prelude::*,
    sprite::MaterialMesh2dBundle,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, toggle_tonemapping_method)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
    asset_server: Res<AssetServer>,
) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            ..default()
        },
        tonemapping: Tonemapping::BlenderFilmic,
        ..default()
    });
    commands.spawn(MaterialMesh2dBundle {
        mesh: meshes.add(Rectangle::default()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(Color::from(PURPLE)),
        ..default()
    });

    commands.spawn(SpriteBundle {
        texture: asset_server.load("asd.png"),
        ..default()
    });
}

fn toggle_tonemapping_method(
    keys: Res<ButtonInput<KeyCode>>,
    mut tonemapping: Query<&mut Tonemapping>,
) {
    let mut method = tonemapping.single_mut();

    if keys.just_pressed(KeyCode::Digit1) {
        *method = Tonemapping::None;
    } else if keys.just_pressed(KeyCode::Digit2) {
        *method = Tonemapping::Reinhard;
    } else if keys.just_pressed(KeyCode::Digit3) {
        *method = Tonemapping::ReinhardLuminance;
    } else if keys.just_pressed(KeyCode::Digit4) {
        *method = Tonemapping::AcesFitted;
    } else if keys.just_pressed(KeyCode::Digit5) {
        *method = Tonemapping::AgX;
    } else if keys.just_pressed(KeyCode::Digit6) {
        *method = Tonemapping::SomewhatBoringDisplayTransform;
    } else if keys.just_pressed(KeyCode::Digit7) {
        *method = Tonemapping::TonyMcMapface;
    } else if keys.just_pressed(KeyCode::Digit8) {
        *method = Tonemapping::BlenderFilmic;
    }
}
```
---

## Changelog
Fixed the bug that led to a crash when using any tonemapper without HDR
for rendering sprites and 2D meshes.
2024-05-28 12:09:26 +00:00
Patrick Walton
f398674e51
Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
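
A minimal camera setup sketch based on the description above (import paths assumed):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle;
use bevy::prelude::*;

fn setup_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Adds ScreenSpaceReflectionsSettings (with sensible defaults) to the camera.
        ScreenSpaceReflectionsBundle::default(),
        // Both prepasses are required for the reflections to show up.
        DepthPrepass,
        DeferredPrepass,
    ));
}
```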

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
2024-05-27 13:43:40 +00:00
Patrick Walton
9da0b2a0ec
Make render phases render world resources instead of components. (#13277)
This commit makes us stop using the render world ECS for
`BinnedRenderPhase` and `SortedRenderPhase` and instead use resources
with `EntityHashMap`s inside. There are three reasons to do this:

1. We can use `clear()` to clear out the render phase collections
instead of recreating the components from scratch, allowing us to reuse
allocations.

2. This is a prerequisite for retained bins, because components can't be
retained from frame to frame in the render world, but resources can.

3. We want to move away from storing anything in components in the
render world ECS, and this is a step in that direction.

This patch results in a small performance benefit, due to point (1)
above.

## Changelog

### Changed

* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources.

## Migration Guide

* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources. Instead of querying for the
components, look the camera entity up in the
`ViewBinnedRenderPhases`/`ViewSortedRenderPhases` tables.
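
A rough sketch of the new lookup pattern (type names taken from the changelog; exact import paths assumed):

```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::prelude::*;
use bevy::render::render_phase::ViewBinnedRenderPhases;
use bevy::render::view::ExtractedView;

// Instead of querying for a BinnedRenderPhase component on the camera entity,
// look the camera entity up in the ViewBinnedRenderPhases resource.
fn queue_custom(
    views: Query<Entity, With<ExtractedView>>,
    mut phases: ResMut<ViewBinnedRenderPhases<Opaque3d>>,
) {
    for view_entity in &views {
        let Some(_phase) = phases.get_mut(&view_entity) else {
            continue;
        };
        // ... add items to `_phase` here ...
    }
}
```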
2024-05-21 18:23:04 +00:00
IceSentry
bf2aced279
Remove another .view_layouts (#13410)
I forgot to save that file when submitting #13394 😅
2024-05-19 00:08:27 +00:00
Patrick Walton
846757cb38
Make the prepass shader compile when lightmaps are present. (#13402)
Commit 3f5a090b1b added a reference to
`STANDARD_MATERIAL_FLAGS_BASE_COLOR_UV_BIT`, a nonexistent identifier,
in the alpha discard portion of the prepass shader. Moreover, the logic
didn't make sense to me. I think the code was trying to choose between
the two UV sets depending on which is present, so I made it do that.

I noticed this when trying Bistro with #13277. I'm not sure why this
issue didn't manifest itself before, but it's clearly a bug, so here's a
fix. We should probably merge this before 0.14.
2024-05-18 22:28:31 +00:00
Johannes Hackel
1fcf6a444f
Add emissive_exposure_weight to the StandardMaterial (#13350)
# Objective

- The emissive color gets multiplied by the camera exposure value. But
this cancels out almost any emissive effect.
- Fixes #13133
- Closes PR #13337 

## Solution
- Add emissive_exposure_weight to the StandardMaterial
- In the shader this value is stored in the alpha channel of the
emissive color.
- This value defines how much the exposure influences the emissive
color.
- It's equal to Google's Filament:
https://google.github.io/filament/Materials.html#emissive

4f021583f1/shaders/src/shading_lit.fs (L287)
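
A minimal usage sketch (the default of 0.0 keeps the emissive color unaffected by exposure, per this PR):

```rust
use bevy::prelude::*;

fn make_glowing(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        emissive: LinearRgba::rgb(5.0, 2.0, 0.5),
        // 0.0: exposure has no effect on the emissive term; 1.0: fully affected.
        emissive_exposure_weight: 0.0,
        ..default()
    })
}
```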

## Testing

- The result of
[EmissiveStrengthTest](https://github.com/KhronosGroup/glTF-Sample-Models/tree/main/2.0/EmissiveStrengthTest)
with the default value of 0.0:

without bloom:

![emissive_fix](https://github.com/bevyengine/bevy/assets/688816/8f8c131a-464a-4d7b-a9e4-4e28d679ee5d)

with bloom:

![emissive_fix_bloom](https://github.com/bevyengine/bevy/assets/688816/89f200ee-3bd5-4daa-bf64-8999b56df3fa)
2024-05-17 13:49:53 +00:00
IceSentry
aa907d5437
Remove unnecessary .view_layouts (#13394)
# Objective

- The volumetric fog PR originally needed to be modified to use
`.view_layouts` but that was changed in another PR. The merge with main
still kept those around.

## Solution

- Remove them because they aren't necessary
2024-05-16 19:12:36 +00:00
Patrick Walton
19bfa41768
Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
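
A minimal setup sketch based on the paragraph above:

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Volumetric fog is a per-camera effect.
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // Only shadow-mapped directional lights can participate, so enable shadows.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```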

The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.

Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.

2. A post-processing blur (especially useful when combined with (1)).

3. Supporting point lights and spot lights.

4. Supporting lights with no shadow maps.

5. Supporting irradiance volumes and reflection probes.

6. Voxel components that reuse the volumetric fog code to create voxel
shapes.

7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.

A new example, `volumetric_fog`, has been added to demonstrate the
effect.

## Changelog

### Added

* A new component, `VolumetricFog`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.

* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.

![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)

![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)

[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html

[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/

[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
2024-05-16 17:13:18 +00:00
charlotte
4c3b7679ec
#12502 Remove limit on RenderLayers. (#13317)
# Objective

Remove the limit on `RenderLayers` by using a growable mask backed by
`SmallVec`.

Changes adopted from @UkoeHB's initial PR here
https://github.com/bevyengine/bevy/pull/12502 that contained additional
changes related to propagating render layers.


## Solution

The main thing needed to unblock this is removing `RenderLayers` from
our shader code. This primarily affects `DirectionalLight`. We are now
computing a `skip` field on the CPU that is then used to skip the light
in the shader.

## Testing

Checked a variety of examples and did a quick benchmark on `many_cubes`.
There were some existing problems identified during the development of
the original pr (see:
https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347).
This PR shouldn't change any existing behavior besides removing the
layer limit (sans the comment in migration about `all` layers no longer
being possible).

---

## Changelog

Removed the limit on `RenderLayers` by using a growable bitset that only
allocates when layers greater than 64 are used.
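
A small sketch of what the growable mask allows (method names assumed from the current `RenderLayers` API):

```rust
use bevy::render::view::RenderLayers;

fn demo() {
    // Layers above 63 are now valid; small layer sets stay allocation-free.
    let layers = RenderLayers::layer(1).with(100);
    assert!(layers.intersects(&RenderLayers::layer(100)));
    assert!(!layers.intersects(&RenderLayers::layer(2)));
}
```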

## Migration Guide

- `RenderLayers::all()` no longer exists. Entities expecting to be
visible on all layers, e.g. lights, should compute the active layers
that are in use.

---------

Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2024-05-16 16:15:47 +00:00
Johannes Hackel
1efa578ffb
Fix transmission by setting the correct value for transmissive_lighting_input.F_ab (#13379)
# Objective

- The clearcoat PR #13031 had a small typo which broke transmission
- Fixes #13284

## Solution

- Set transmissive_lighting_input.F_ab to the correct value


![transmission_fix](https://github.com/bevyengine/bevy/assets/688816/92158117-de3a-4fa5-8af8-dcbd1d5eee04)
2024-05-16 14:33:32 +00:00
Griffin
519ed5de42
Apply uv transform in the prepass (#13250)
# Objective

- The UV transform was applied in the main pass but not the prepass.

## Solution

- Apply the UV transform in the prepass.

## Testing

- The normals in my scene now look correct when using the prepass.
2024-05-13 22:33:09 +00:00
Johannes Hackel
3f5a090b1b
Add UV channel selection to StandardMaterial (#13200)
# Objective

- The StandardMaterial always uses ATTRIBUTE_UV_0 for every texture
except the lightmap. This is not flexible enough for a lot of glTF files.
- Fixes #12496
- Fixes #13086
- Fixes #13122
- Closes #13153

## Solution

- The StandardMaterial is extended with a UvChannel enum for each texture.
It defaults to Uv0 but can also be set to Uv1.
- The glTF loader now handles the texcoord information. If the texcoord
is not supported, it emits a warning.
- It uses StandardMaterial shader defs to define which attribute to use.
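
A rough sketch of the per-texture channel selection (the `base_color_channel` field name is an assumption following the `*_channel` pattern):

```rust
use bevy::pbr::UvChannel;
use bevy::prelude::*;

fn make_material(texture: Handle<Image>) -> StandardMaterial {
    StandardMaterial {
        base_color_texture: Some(texture),
        // Sample this texture with ATTRIBUTE_UV_1 instead of the default ATTRIBUTE_UV_0.
        base_color_channel: UvChannel::Uv1,
        ..default()
    }
}
```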

## Testing

This fixes #12496 for example:

![wall_fixed](https://github.com/bevyengine/bevy/assets/688816/bc37c9e1-72ba-4e59-b092-5ee10dade603)

For testing of all kind of textures I used the TextureTransformMultiTest
from
https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main/Models/TextureTransformMultiTest
Its purpose is to test multiple texture transforms, but it is also a good
test for different texcoords.
It also shows the issue with emission #13133.

Before:

![TextureTransformMultiTest_main](https://github.com/bevyengine/bevy/assets/688816/aa701d04-5a3f-4df1-a65f-fc770ab6f4ab)

After:

![TextureTransformMultiTest_texcoord](https://github.com/bevyengine/bevy/assets/688816/c3f91943-b830-4068-990f-e4f2c97771ee)
2024-05-13 18:23:09 +00:00
François Mockers
173db7726f
remove unused warnings in release (#13344)
# Objective

- When building for release, there are "unused" warnings:
```
warning: unused import: `bevy_utils::warn_once`
  --> crates/bevy_pbr/src/render/mesh_view_bindings.rs:32:5
   |
32 | use bevy_utils::warn_once;
   |     ^^^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

warning: unused variable: `texture_count`
   --> crates/bevy_pbr/src/render/mesh_view_bindings.rs:371:17
    |
371 |             let texture_count: usize = entries
    |                 ^^^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_texture_count`
    |
    = note: `#[warn(unused_variables)]` on by default
```

## Solution

- Gate the import and definition by the same cfg as their uses
2024-05-12 22:30:34 +00:00
moonlightaria
3f2cc244d7
Add color conversions #13224 (#13276)
# Objective
fixes #13224
adds conversions for Vec3 and Vec4 since these appear so often

## Solution
added a Convert trait (couldn't think of a good name) for [f32; 4], [f32; 3],
Vec4, and Vec3, along with the symmetric implementations

## Changelog
added conversions between arrays and vectors to colors and vice versa

## Migration Guide
LinearRgba appears to have already had implicit conversions for [f32;4]
and Vec4
2024-05-09 18:01:52 +00:00
Patrick Walton
0dddfa07ab
Fix the WebGL 2 backend by giving the visibility_ranges array a fixed length. (#13210)
WebGL 2 doesn't support variable-length uniform buffer arrays. So we
arbitrarily set the length of the visibility ranges field to 64 on that
platform.

---------

Co-authored-by: IceSentry <c.giguere42@gmail.com>
2024-05-08 07:34:59 +00:00
IceSentry
4737106bdd
Extract mesh view layouts logic (#13266)
Copied almost verbatim from the volumetric fog PR

# Objective

- Managing mesh view layouts is complicated

## Solution

- Extract it to its own struct
- This was done as part of #13057 and is copied almost verbatim. I
wanted to keep this part of the PR as its own atomic commit in case we
ever have to revert fog or run a bisect. This change is good whether or
not we have volumetric fog.

Co-Authored-By: @pcwalton
2024-05-07 06:46:41 +00:00
Fpgu
60a73fa60b
Use Dir3 for local axis methods in GlobalTransform (#13264)
Switched the return type from `Vec3` to `Dir3` for directional axis
methods within the `GlobalTransform` component.
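
A small sketch of the new return type in use:

```rust
use bevy::prelude::*;

fn move_forward(mut query: Query<(&mut Transform, &GlobalTransform)>, time: Res<Time>) {
    for (mut transform, global) in &mut query {
        let forward: Dir3 = global.forward(); // now a guaranteed-normalized direction
        transform.translation += forward * 2.0 * time.delta_seconds(); // Dir3 * f32 yields a Vec3
    }
}
```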

## Migration Guide
The `GlobalTransform` component's directional axis methods (e.g.,
`right()`, `left()`, `up()`, `down()`, `back()`, `forward()`) have been
updated from returning `Vec3` to `Dir3`.
2024-05-06 20:52:05 +00:00
Patrick Walton
59b52fc94e
Modulate the emissive texture by the emissive color again. (#13251)
Fixes a regression introduced by #13031.
2024-05-06 20:06:10 +00:00
Patrick Walton
77ed72bc16
Implement clearcoat per the Filament and the KHR_materials_clearcoat specifications. (#13031)
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.

The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.

Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_multi_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
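
A rough usage sketch with the constant (non-texture) clearcoat parameters, which work on all platforms (field names assumed):

```rust
use bevy::prelude::*;

fn make_car_paint(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        base_color: Color::srgb(0.6, 0.0, 0.0),
        metallic: 0.9,
        perceptual_roughness: 0.5,
        // The thin translucent layer on top: full strength, very smooth.
        clearcoat: 1.0,
        clearcoat_perceptual_roughness: 0.1,
        ..default()
    })
}
```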

This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.

A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.

[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel

[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md

[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html

![Screenshot 2024-04-19
101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)

![Screenshot 2024-04-19
102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)

## Changelog

### Added

* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.

## Migration Guide

* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.

* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
2024-05-05 22:57:05 +00:00
arcashka
6027890a11
move wgsl color operations from bevy_pbr to bevy_render (#13209)
# Objective

The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and
color conversion functions. Both of those should be accessible without
enabling the `bevy_pbr` feature. For example, tonemapping can be done in a
non-PBR scenario, and it uses color conversion functions.

Fixes #13207

## Solution

* Move mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos
---

## Changelog
* Moved mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Migration Guide
In user shader code, replace usages of mathematical constants from
`bevy_pbr::utils` with the same constants from `bevy_render::maths`.
2024-05-04 10:30:23 +00:00
Kristoffer Søholm
2089a28717
Add BufferVec, a higher-performance alternative to StorageBuffer, and make GpuArrayBuffer use it. (#13199)
This is an adoption of #12670 plus some documentation fixes. See that PR
for more details.

---

## Changelog

* Renamed `BufferVec` to `RawBufferVec` and added a new `BufferVec`
type.

## Migration Guide
`BufferVec` has been renamed to `RawBufferVec` and a new similar type
has taken the `BufferVec` name.

---------

Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
2024-05-03 11:39:21 +00:00
Patrick Walton
31835ff76d
Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).

This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.
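
A rough sketch of the component (field layout assumed from the current API; the margins give the crossfade bands):

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn add_lod_range(mut commands: Commands, query: Query<Entity, With<Handle<Mesh>>>) {
    for entity in &query {
        commands.entity(entity).insert(VisibilityRange {
            // Fade in between 5 and 10 units from the camera...
            start_margin: 5.0..10.0,
            // ...and fade back out between 100 and 120 units.
            end_margin: 100.0..120.0,
        });
    }
}
```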

This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.

The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.

Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.

Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.

A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.

[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map

[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.

## Changelog

### Added

* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).

## Screenshots

High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)

Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)

Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-05-03 00:11:35 +00:00
Patrick Walton
961b24deaf
Implement filmic color grading. (#13121)
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:

* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].

* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.

* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function]
(a small sketch of that function follows this list).

* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.
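
For reference, a minimal per-channel sketch of the ASC CDL combined function mentioned above (slope, offset, and power roughly correspond to the gain, lift, and gamma controls):

```rust
/// ASC CDL combined function, per channel: out = clamp(in * slope + offset)^power.
fn asc_cdl(input: f32, slope: f32, offset: f32, power: f32) -> f32 {
    (input * slope + offset).clamp(0.0, 1.0).powf(power)
}
```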

A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.

[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space

[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D

[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space

[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV

[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function

## Changelog

### Added

* Many new filmic color grading options have been added to the
`ColorGrading` component.

## Migration Guide

* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.

## Screenshots

![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)

![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
2024-05-02 12:18:59 +00:00
Patrick Walton
16531fb3e3
Implement GPU frustum culling. (#12889)
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.

Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.

A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.

This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.

## Changelog

### Added

* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
2024-04-28 12:50:00 +00:00
re0312
92928f13ed
Cleanup extract_meshes (#13026)
# Objective

- clean up extract_mesh_(gpu/cpu)_building

## Solution

- gpu_building no longer needs to hold `prev_render_mesh_instances`
- use `insert_unique_unchecked` instead of a plain insert, since we know
all entities are unique
- directly get `previous_input_index` in the parallel loop


## Performance
This should also bring a slight performance win.

cargo run --release --example many_cubes --features bevy/trace_tracy --
--no-frustum-culling
`extract_meshes_for_gpu_building`


![image](https://github.com/bevyengine/bevy/assets/45868716/a5425e8a-258b-482d-afda-170363ee6479)

---------

Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
2024-04-26 23:49:32 +00:00
JMS55
17633c1f75
Remove unused push constants (#13076)
The shader code was removed in #11280, but we never cleaned up the rust
code.
2024-04-23 21:43:46 +00:00
re0312
0f27500e46
Improve par_iter and Parallel (#12904)
# Objective

- Bevy usually uses `Parallel::scope` to collect items from `par_iter`,
but `scope` is called for every matching item, which causes a lot of
unnecessary lookups.

## Solution

- Similar to Rayon, we introduce `for_each_init` for `par_iter`, whose
init closure is only invoked when a task is spawned for a group of items.
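
A rough usage sketch (hypothetical system; the point is that the first closure runs once per spawned task, not once per item):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Renderable;

fn collect_renderables(query: Query<Entity, With<Renderable>>) {
    query.par_iter().for_each_init(
        || Vec::with_capacity(64),              // init: once per task/batch
        |scratch, entity| scratch.push(entity), // body: once per entity, reusing `scratch`
    );
}
```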

---

## Changelog

- added  `for_each_init`

## Performance
`check_visibility` in `many_foxes`

![image](https://github.com/bevyengine/bevy/assets/45868716/030c41cf-0d2f-4a36-a071-35097d93e494)
 
~40% performance gain in `check_visibility`.

---------

Co-authored-by: James Liu <contact@jamessliu.com>
2024-04-23 12:05:34 +00:00
François Mockers
c40b485095
use a u64 for MeshPipelineKey (#13015)
# Objective

- `MeshPipelineKey` uses some bits for two things
- The first commit in this PR adds an assertion that currently fails on
main
- This leads to some mesh topologies not working anymore, for example
`LineStrip`
- In the `lines` example, there should be two groups of lines; the blue
one currently doesn't display

## Solution

- Change the `MeshPipelineKey` to be backed by a `u64` instead, to have
enough bits
2024-04-21 20:01:45 +00:00
Patrick Walton
1141e731ff
Implement alpha to coverage (A2C) support. (#12970)
[Alpha to coverage] (A2C) replaces alpha blending with a
hardware-specific multisample coverage mask when multisample
antialiasing is in use. It's a simple form of [order-independent
transparency] that relies on MSAA. ["Anti-aliased Alpha Test: The
Esoteric Alpha To Coverage"] is a good summary of the motivation for and
best practices relating to A2C.

This commit implements alpha to coverage support as a new variant for
`AlphaMode`. You can supply `AlphaMode::AlphaToCoverage` as the
`alpha_mode` field in `StandardMaterial` to use it. When in use, the
standard material shader automatically applies the texture filtering
method from ["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"].
Objects with alpha-to-coverage materials are binned in the opaque pass,
as they're fully order-independent.
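
A small usage sketch per the description above:

```rust
use bevy::prelude::*;

fn make_foliage_material(leaf_texture: Handle<Image>) -> StandardMaterial {
    StandardMaterial {
        base_color_texture: Some(leaf_texture),
        // Alpha-tested cutout rendered through the MSAA coverage mask; the mesh
        // is binned with the opaque geometry, so no back-to-front sorting is needed.
        alpha_mode: AlphaMode::AlphaToCoverage,
        ..default()
    }
}
```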

The `transparency_3d` example has been updated to feature an object with
alpha to coverage. Happily, the example was already using MSAA.

This is part of #2223, as far as I can tell.

[Alpha to coverage]: https://en.wikipedia.org/wiki/Alpha_to_coverage

[order-independent transparency]:
https://en.wikipedia.org/wiki/Order-independent_transparency

["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"]:
https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f

---

## Changelog

### Added

* The `AlphaMode` enum now supports `AlphaToCoverage`, to provide
limited order-independent transparency when multisample antialiasing is
in use.
2024-04-15 20:37:52 +00:00
re0312
09a1f94d14
fix shadow pass trace (#12977)
# Objective

- shadow pass trace does not work correctly

## Solution

- enable it.
2024-04-15 15:55:39 +00:00
Robert Swain
5f05e75a70
Fix 2D BatchedInstanceBuffer clear (#12922)
# Objective

- `cargo run --release --example bevymark -- --benchmark --waves 160
--per-wave 1000 --mode mesh2d` runs slower and slower over time due to
`no_gpu_preprocessing::write_batched_instance_buffer<bevy_sprite::mesh2d::mesh::Mesh2dPipeline>`
taking longer and longer because the `BatchedInstanceBuffer` is not
cleared

## Solution

- Split the `clear_batched_instance_buffers` system into CPU and GPU
versions
- Use the CPU version for 2D meshes
2024-04-15 05:00:43 +00:00
Patrick Walton
5caf085dac
Divide the single VisibleEntities list into separate lists for 2D meshes, 3D meshes, lights, and UI elements, for performance. (#12582)
This commit splits `VisibleEntities::entities` into four separate lists:
one for lights, one for 2D meshes, one for 3D meshes, and one for UI
elements. This allows `queue_material_meshes` and similar methods to
avoid examining entities that are obviously irrelevant. In particular,
this separation helps scenes with many skinned meshes, as the individual
bones are considered visible entities but have no rendered appearance.

Internally, `VisibleEntities::entities` is a `HashMap` from the `TypeId`
representing a `QueryFilter` to the appropriate `Entity` list. I had to
do this because `VisibleEntities` is located within an upstream crate
from the crates that provide lights (`bevy_pbr`) and 2D meshes
(`bevy_sprite`). As an added benefit, this setup allows apps to provide
their own types of renderable components, by simply adding a specialized
`check_visibility` to the schedule.
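
A rough sketch of reading one of the new per-type lists (method and filter-type names assumed from the migration guide below):

```rust
use bevy::prelude::*;
use bevy::render::view::{VisibleEntities, WithMesh};

// Log how many 3D meshes each view considers visible, ignoring lights,
// 2D meshes, and UI nodes entirely.
fn log_visible_meshes(views: Query<&VisibleEntities, With<Camera>>) {
    for visible in &views {
        info!("{} visible 3D meshes", visible.get::<WithMesh>().len());
    }
}
```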

This provides a 16.23% end-to-end speedup on `many_foxes` with 10,000
foxes (24.06 ms/frame to 20.70 ms/frame).

## Migration guide

* `check_visibility` and `VisibleEntities` now store the four types of
renderable entities--2D meshes, 3D meshes, lights, and UI
elements--separately. If your custom rendering code examines
`VisibleEntities`, it will now need to specify which type of entity it's
interested in using the `WithMesh2d`, `WithMesh`, `WithLight`, and
`WithNode` types respectively. If your app introduces a new type of
renderable entity, you'll need to add an explicit call to
`check_visibility` to the schedule to accommodate your new component or
components.

## Analysis

`many_foxes`, 10,000 foxes: `main`:
![Screenshot 2024-03-31
114444](https://github.com/bevyengine/bevy/assets/157897/16ecb2ff-6e04-46c0-a4b0-b2fde2084bad)

`many_foxes`, 10,000 foxes, this branch:
![Screenshot 2024-03-31
114256](https://github.com/bevyengine/bevy/assets/157897/94dedae4-bd00-45b2-9aaf-dfc237004ddb)

`queue_material_meshes` (yellow = this branch, red = `main`):
![Screenshot 2024-03-31
114637](https://github.com/bevyengine/bevy/assets/157897/f90912bd-45bd-42c4-bd74-57d98a0f036e)

`queue_shadows` (yellow = this branch, red = `main`):
![Screenshot 2024-03-31
114607](https://github.com/bevyengine/bevy/assets/157897/6ce693e3-20c0-4234-8ec9-a6f191299e2d)
2024-04-11 20:33:20 +00:00
Patrick Walton
d59b1e71ef
Implement percentage-closer filtering (PCF) for point lights. (#12910)
I ported the two existing PCF techniques to the cubemap domain as best I
could. Generally, the technique is to create a 2D orthonormal basis
using Gram-Schmidt normalization, then apply the technique over that
basis. The results look fine, though the shadow bias often needs
adjusting.
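
For reference, a CPU-side sketch of the Gram-Schmidt construction described above (the shader applies the same idea to the light-to-fragment direction):

```rust
use bevy::math::Vec3;

// Build an orthonormal basis perpendicular to `dir` (assumed normalized):
// pick a helper axis, remove its component along `dir`, normalize, then cross.
fn orthonormal_basis(dir: Vec3) -> (Vec3, Vec3) {
    let helper = if dir.y.abs() < 0.99 { Vec3::Y } else { Vec3::X };
    let tangent = (helper - dir * helper.dot(dir)).normalize();
    let bitangent = dir.cross(tangent);
    (tangent, bitangent)
}
```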

For comparison, Unity uses a 4-tap pattern for PCF on point lights of
(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1). I tried this but
didn't like the look, so I went with the design above, which ports the
2D techniques to the 3D domain. There's surprisingly little material on
point light PCF.

I've gone through every example using point lights and verified that the
shadow maps look fine, adjusting biases as necessary.

Fixes #3628.

---

## Changelog

### Added
* Shadows from point lights now support percentage-closer filtering
(PCF), and as a result look less aliased.

### Changed
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.

## Migration Guide

* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.
2024-04-10 20:16:08 +00:00
Patrick Walton
11817f4ba4
Generate MeshUniforms on the GPU via compute shader where available. (#12773)
Currently, `MeshUniform`s are rather large: 160 bytes. They're also
somewhat expensive to compute, because they involve taking the inverse
of a 3x4 matrix. Finally, if a mesh is present in multiple views, that
mesh will have a separate `MeshUniform` for each and every view, which
is wasteful.

This commit fixes these issues by introducing the concept of a *mesh
input uniform* and adding a *mesh uniform building* compute shader pass.
The `MeshInputUniform` is simply the minimum amount of data needed for
the GPU to compute the full `MeshUniform`. Most of this data is just the
transform and is therefore only 64 bytes. `MeshInputUniform`s are
computed during the *extraction* phase, much like skins are today, in
order to avoid needlessly copying transforms around on CPU. (In fact,
the render app has been changed to only store the translation of each
mesh; it no longer cares about any other part of the transform, which is
stored only on the GPU and the main world.) Before rendering, the
`build_mesh_uniforms` pass runs to expand the `MeshInputUniform`s to the
full `MeshUniform`.

The mesh uniform building pass does the following, all on GPU:

1. Copy the appropriate fields of the `MeshInputUniform` to the
`MeshUniform` slot. If a single mesh is present in multiple views, this
effectively duplicates it into each view.

2. Compute the inverse transpose of the model transform, used for
transforming normals.

3. If applicable, copy the mesh's transform from the previous frame for
TAA. To support this, we double-buffer the `MeshInputUniform`s over two
frames and swap the buffers each frame. The `MeshInputUniform`s for the
current frame contain the index of that mesh's `MeshInputUniform` for
the previous frame.

This commit produces wins in virtually every CPU part of the pipeline:
`extract_meshes`, `queue_material_meshes`,
`batch_and_prepare_render_phase`, and especially
`write_batched_instance_buffer` are all faster. Shrinking the amount of
CPU data that has to be shuffled around speeds up the entire rendering
process.

| Benchmark              | This branch | `main`  | Speedup |
|------------------------|-------------|---------|---------|
| `many_cubes -nfc`      |      17.259 |  24.529 |  42.12% |
| `many_cubes -nfc -vpi` |     302.116 | 312.123 |   3.31% |
| `many_foxes`           |       3.227 |   3.515 |   8.92% |

Because mesh uniform building requires compute shader, and WebGL 2 has
no compute shader, the existing CPU mesh uniform building code has been
left as-is. Many types now have both CPU mesh uniform building and GPU
mesh uniform building modes. Developers can opt into the old CPU mesh
uniform building by setting the `use_gpu_uniform_builder` option on
`PbrPlugin` to `false`.

Below are graphs of the CPU portions of `many-cubes
--no-frustum-culling`. Yellow is this branch, red is `main`.

`extract_meshes`:
![Screenshot 2024-04-02
124842](https://github.com/bevyengine/bevy/assets/157897/a6748ea4-dd05-47b6-9254-45d07d33cb10)
It's notable that we get a small win even though we're now writing to a
GPU buffer.

`queue_material_meshes`:
![Screenshot 2024-04-02
124911](https://github.com/bevyengine/bevy/assets/157897/ecb44d78-65dc-448d-ba85-2de91aa2ad94)
There's a bit of a regression here; not sure what's causing it. In any
case it's very outweighed by the other gains.

`batch_and_prepare_render_phase`:
![Screenshot 2024-04-02
125123](https://github.com/bevyengine/bevy/assets/157897/4e20fc86-f9dd-4e5c-8623-837e4258f435)
There's a huge win here, enough to make batching basically drop off the
profile.

`write_batched_instance_buffer`:
![Screenshot 2024-04-02
125237](https://github.com/bevyengine/bevy/assets/157897/401a5c32-9dc1-4991-996d-eb1cac6014b2)
There's a massive improvement here, as expected. Note that a lot of it
simply comes from the fact that `MeshInputUniform` is `Pod`. (This isn't
a maintainability problem in my view because `MeshInputUniform` is so
simple: just 16 tightly-packed words.)

## Changelog

### Added

* Per-mesh instance data is now generated on GPU with a compute shader
instead of CPU, resulting in rendering performance improvements on
platforms where compute shaders are supported.

## Migration guide

* Custom render phases now need multiple systems beyond just
`batch_and_prepare_render_phase`. Code that was previously creating
custom render phases should now add a `BinnedRenderPhasePlugin` or
`SortedRenderPhasePlugin` as appropriate instead of directly adding
`batch_and_prepare_render_phase`.
2024-04-10 05:33:32 +00:00
Robert Swain
ab7cbfa8fc
Consolidate Render(Ui)Materials(2d) into RenderAssets (#12827)
# Objective

- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials`
with `RenderAssets` to enable implementing changes to one thing,
`RenderAssets`, that applies to all use cases rather than duplicating
changes everywhere for multiple things that should be one thing.
- Adopts #8149 

## Solution

- Make RenderAsset generic over the destination type rather than the
source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc for render materials

---

## Changelog

- Changed:
- The `RenderAsset` trait is now implemented on the destination type.
Its `SourceAsset` associated type refers to the type of the source
asset.
- `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have
been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.

## Migration Guide

- `RenderAsset` is now implemented for the destination type rather that
the source asset type. The source asset type is now the `RenderAsset`
trait's `SourceAsset` associated type.
2024-04-09 13:26:34 +00:00
UkoeHB
2ee69807b1
Fix potential out-of-bounds access in pbr_functions.wgsl (#12585)
# Objective

- Fix a potential out-of-bounds access in the `pbr_functions.wgsl`
shader.

## Solution

- Correctly compute the `GpuLights::directional_lights` array length.

## Comments

I think this solves this comment in the code, but need someone to test
it:
```rust
//NOTE: When running bevy on Adreno GPU chipsets in WebGL, any value above 1 will result in a crash
// when loading the wgsl "pbr_functions.wgsl" in the function apply_fog.
```
2024-04-08 17:00:09 +00:00
Martín Maita
3fc0c6869d
Bump crate-ci/typos from 1.19.0 to 1.20.4 (#12907)
# Objective

- Adopting https://github.com/bevyengine/bevy/pull/12903.

## Solution

- Bump crate-ci/typos from 1.19.0 to 1.20.4.
- Fixed a typo in `crates/bevy_pbr/src/render/pbr_functions.wgsl` file.
- Added "PNG", "iy" and "SME" as exceptions to prevent false positives.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-08 15:31:11 +00:00
JMS55
31b5943ad4
Add previous_view_uniforms.inverse_view (#12902)
# Objective
- Upload previous frame's inverse_view matrix to the GPU for use with
https://github.com/bevyengine/bevy/pull/12898.

---

## Changelog
- Added `prepass_bindings::previous_view_uniforms.inverse_view`.
- Renamed `prepass_bindings::previous_view_proj` to
`prepass_bindings::previous_view_uniforms.view_proj`.
- Renamed `PreviousViewProjectionUniformOffset` to
`PreviousViewUniformOffset`.
- Renamed `PreviousViewProjection` to `PreviousViewData`.

## Migration Guide
- Renamed `prepass_bindings::previous_view_proj` to
`prepass_bindings::previous_view_uniforms.view_proj`.
- Renamed `PreviousViewProjectionUniformOffset` to
`PreviousViewUniformOffset`.
- Renamed `PreviousViewProjection` to `PreviousViewData`.
2024-04-07 18:59:16 +00:00
robtfm
452821dd52
more robust gpu image use (#12606)
# Objective

Make morph targets and tonemapping more tolerant of delayed image
loading.

Neither of these actually fails currently unless a bespoke loader is used
(and even then it would be rare), but I am working on adding throttling
for asset GPU uploads (as a stopgap until we can do proper asset
streaming), and they break with that.

## Solution

When a mesh with morph targets is uploaded to the GPU, the prepare
function uploads the morph target texture if it's available; otherwise
it uploads without morph targets. This is generally fine, since morph
targets are typically loaded from bytes (as in the glTF loader), but it may
fail for a custom loader if the asset server async-loads the target
texture and the texture is not available yet. The mesh then fails to render
and doesn't update when the image is loaded.
-> If morph targets are specified but not ready yet, retry the mesh upload
next frame.

Tonemapping `unwrap`s on the lookup table image. This is never a problem
since the image is added via `include_bytes!`, but it could become a problem
in the future with asset GPU throttling/streaming.
-> If the lookup texture is not yet available, use a fallback.
-> In the node, check whether the fallback was used before caching the bind
group.
2024-04-07 17:18:58 +00:00
François Mockers
a9964f442d
fix msaa shift with irradiance volumes in mesh pipeline key (#12845)
# Objective

- #12791 broke example `irradiance_volumes`
- Fixes #12876 

```
wgpu error: Validation Error

Caused by:
    In Device::create_render_pipeline
      note: label = `pbr_opaque_mesh_pipeline`
    Color state [0] is invalid
    Sample count 8 is not supported by format Rgba8UnormSrgb on this device. The WebGPU spec guarentees [1, 4] samples are supported by this format. With the TEXTURE_ADAPTER_SPECIFIC_FORMAT_FEATURES feature your device supports [1, 2, 4].
```

## Solution

- Shift bits a bit more
2024-04-05 17:50:23 +00:00
James Liu
a4ed1b88b8
Relax BufferVec's type constraints (#12866)
# Objective
Since BufferVec was first introduced, `bytemuck` has added additional
traits with fewer restrictions than `Pod`. Within BufferVec, we only
rely on the constraints of `bytemuck::cast_slice` to a `u8` slice, which
now only requires `T: NoUninit`, a strict superset of `Pod`
types.

## Solution
Change out the `Pod` generic type constraint with `NoUninit`. Also
taking the opportunity to substitute `cast_slice` with
`must_cast_slice`, which avoids a runtime panic in place of a compile
time failure if `T` cannot be used.

---

## Changelog
Changed: `BufferVec` now supports working with types containing
`NoUninit` but not `Pod` members.
Changed: `BufferVec` will now fail to compile if used with a type that
cannot be safely read from. Most notably, this includes ZSTs, which
would previously always panic at runtime.
2024-04-05 02:11:41 +00:00