Commit graph

175 commits

Chris Biscardi
4d3f43131e
Use a ship in Transform::align example (#13935)
# Objective

The documentation for
[`Transform::align`](https://docs.rs/bevy/0.14.0-rc.3/bevy/transform/components/struct.Transform.html#method.align)
mentions a hypothetical ship model. Showing this concretely would be a
nice improvement over using a cube.

> For example, if a spaceship model has its nose pointing in the
X-direction in its own local coordinates and its dorsal fin pointing in
the Y-direction, then align(Dir3::X, v, Dir3::Y, w) will make the
spaceship’s nose point in the direction of v, while the dorsal fin does
its best to point in the direction w.


## Solution

This commit makes the ship less hypothetical by using a Kenney ship
model in the example.

The local axes for the ship needed to change to accommodate the gltf, so
the hypothetical in the documentation and this example's local axes
don't necessarily match. Docs use `align(Dir3::X, v, Dir3::Y, w)` and
this example now uses `(Vec3::NEG_Z, *first, Vec3::X, *second)`.
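
A minimal sketch of that call, with the target directions passed as plain `Vec3`s (`Transform::align` accepts anything convertible to `Dir3`; the helper name is hypothetical):

```rust
use bevy::prelude::*;

// Points the ship's -Z nose at `target` while rolling its +X side as
// close to `roll_hint` as possible.
fn point_ship(transform: &mut Transform, target: Vec3, roll_hint: Vec3) {
    transform.align(Vec3::NEG_Z, target, Vec3::X, roll_hint);
}
```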

I manually modified the `craft_speederD` Node's `translation` to be
0,0,0 in the gltf file, which means it now differs from kenney's
original model.

Original ship from: https://kenney.nl/assets/space-kit

## Testing

```
cargo run --example align
```

![screenshot-2024-06-19-at-14 27 05@2x](https://github.com/bevyengine/bevy/assets/551247/ab1afc8f-76b2-42b6-b455-f0d1c77cfed7)
![screenshot-2024-06-19-at-14 27 12@2x](https://github.com/bevyengine/bevy/assets/551247/4a01031c-4ea1-43ab-8078-3656db67efe0)
![screenshot-2024-06-19-at-14 27 20@2x](https://github.com/bevyengine/bevy/assets/551247/06830f38-ba2b-4e3a-a265-2d10f9ea9de9)
2024-06-20 00:58:00 +00:00
JMS55
c50a4d8821
Remove unused mip_bias parameter from apply_normal_mapping (#13752)
Mip bias is no longer used here
2024-06-10 13:00:34 +00:00
Patrick Walton
df8ccb8735
Implement PBR anisotropy per KHR_materials_anisotropy. (#13450)
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.

[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.

Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
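
As a rough sketch of setting these parameters (field names per this PR; values are illustrative):

```rust
use bevy::prelude::*;
use std::f32::consts::FRAC_PI_4;

fn setup(mut materials: ResMut<Assets<StandardMaterial>>) {
    // Brushed-metal-like material: strongly stretched highlight, rotated 45°.
    let _brushed_metal = materials.add(StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        anisotropy_strength: 0.8,       // 0..1: how stretched the highlight is
        anisotropy_rotation: FRAC_PI_4, // rotate the roughness direction off the tangent
        ..default()
    });
}
```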

In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.

The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.

A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.

[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel

[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering

[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md

[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/

## Changelog

### Added

* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.

## Screenshots

With anisotropy:
![Screenshot 2024-05-20 233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)

Without anisotropy:
![Screenshot 2024-05-20 233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
2024-06-03 23:46:06 +00:00
Ricky Taylor
9b9d3d81cb
Normalise matrix naming (#13489)
# Objective
- Fixes #10909
- Fixes #8492

## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
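
A hedged illustration of why this naming helps: compositions read right to left, and adjacent space names must match up.

```rust
use bevy::math::Mat4;

// With `x_from_y` names, the inner names cancel visually and the result's
// name falls out of the outer ones: clip_from_local.
fn clip_from_local(clip_from_view: Mat4, view_from_world: Mat4, world_from_local: Mat4) -> Mat4 {
    clip_from_view * view_from_world * world_from_local
}
```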

## Testing
- I've tested most of the 3D examples. The `lighting` example in
particular should hit a lot of the changes and appears to run fine.

---

## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming,
making the space conversion more obvious.

## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
2024-06-03 16:56:53 +00:00
Mark Moissette
d26900a9ea
add handling of all missing gltf extras: scene, mesh & materials (#13453)
# Objective

- fixes #4823 

## Solution

As outlined in the discussion in the linked issue, the best current
solution is to add specific `GltfExtras` for
 - scenes 
 - meshes
 - materials

- As it is, this is not a breaking change. I hesitated to rename the
current `GltfExtras` component to `PrimitiveGltfExtras`, but that would
result in a breaking change and might be a bit confusing as to what
"primitive" refers to.
 

## Testing

- I included a bare-bones example & asset (exported gltf file from
Blender) with gltf extras at all the relevant levels : scene, mesh,
material

---

## Changelog
- adds "SceneGltfExtras" injected at the scene level if any
- adds "MeshGltfExtras", injected at the mesh level if any
- adds "MaterialGltfExtras", injected at the mesh level if any: ie if a
mesh has a material that has gltf extras, the component will be injected
there.
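
A hedged sketch of reading extras in a system; the new scene/mesh/material components are queried the same way (assuming they expose a raw JSON `value` string like the existing `GltfExtras` does):

```rust
use bevy::gltf::GltfExtras;
use bevy::prelude::*;

// Swap the component type in the query for the scene/mesh/material variants.
fn print_extras(query: Query<(Entity, &GltfExtras)>) {
    for (entity, extras) in &query {
        info!("{entity:?} has extras: {}", extras.value);
    }
}
```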
2024-06-03 13:16:38 +00:00
Patrick Walton
f398674e51
Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
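
A minimal setup sketch based on the components named above:

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // SSR needs both prepasses in addition to the settings component.
        ScreenSpaceReflectionsSettings::default(),
        DepthPrepass,
        DeferredPrepass,
    ));
}
```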

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18 143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18 143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
2024-05-27 13:43:40 +00:00
François Mockers
5a1c62faae
fix lava emissive strength in depth of field example (#13449)
# Objective

- Since #13350 the lava is way too bright

![ba6ab328bc334085a237550e66ce91effa8950c7241ddef3eb10a8be336a68ef](https://github.com/bevyengine/bevy/assets/8672791/3a7aa7c0-5481-4616-8f84-2b69b57cf81c)

## Solution

- Change the emissive value in the material in the glb file from 20000
to 20

## Testing

- Run the example `depth_of_field`
2024-05-21 19:35:08 +00:00
Patrick Walton
19bfa41768
Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
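
A minimal sketch of that setup:

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Fog is configured per camera...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    // ...and only shadow-mapped directional lights marked VolumetricLight
    // participate in it.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```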

The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.

Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.

2. A post-processing blur (especially useful when combined with (1)).

3. Supporting point lights and spot lights.

4. Supporting lights with no shadow maps.

5. Supporting irradiance volumes and reflection probes.

6. Voxel components that reuse the volumetric fog code to create voxel
shapes.

7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.

A new example, `volumetric_fog`, has been added to demonstrate the
effect.

## Changelog

### Added

* A new component, `VolumetricFog`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.

* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.

![Screenshot 2024-04-21 162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)

![Screenshot 2024-04-21 132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)

[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html

[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/

[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
2024-05-16 17:13:18 +00:00
Patrick Walton
df31b808c3
Implement fast depth of field as a postprocessing effect. (#13009)
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.

The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur, plus two diagonal blurs. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.

The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number], which we use to compute
the focal length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.
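
A hedged sketch of customizing those values (field names as in the new `dof` module; values are illustrative):

```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            mode: DepthOfFieldMode::Bokeh,
            focal_distance: 10.0,  // distance to the plane in focus (world units)
            aperture_f_stops: 1.0, // f/1, the cinematic default described above
            ..default()
        },
    ));
}
```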

A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.

Bokeh depth of field:
![Screenshot 2024-04-17 152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)

Gaussian depth of field:
![Screenshot 2024-04-17 152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)

No depth of field:
![Screenshot 2024-04-17 152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)

[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field

[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[f-number]: https://en.wikipedia.org/wiki/F-number

[Super 35]: https://en.wikipedia.org/wiki/Super_35

[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

## Changelog

### Added

* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
2024-05-13 18:23:56 +00:00
Patrick Walton
77ed72bc16
Implement clearcoat per the Filament and the KHR_materials_clearcoat specifications. (#13031)
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.

The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.

Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_feature_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
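
A hedged sketch of a constant-factor clearcoat material (no maps, so it also works on WebGL/WebGPU):

```rust
use bevy::prelude::*;

fn setup(mut materials: ResMut<Assets<StandardMaterial>>) {
    // Car-paint-style material: a rough colored base under a smooth coat.
    let _car_paint = materials.add(StandardMaterial {
        base_color: Color::srgb(0.6, 0.0, 0.0),
        perceptual_roughness: 0.5,
        clearcoat: 1.0,
        clearcoat_perceptual_roughness: 0.1,
        ..default()
    });
}
```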

This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.

A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.

[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel

[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md

[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html

![Screenshot 2024-04-19 101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)

![Screenshot 2024-04-19 102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)

## Changelog

### Added

* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.

## Migration Guide

* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.

* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
2024-05-05 22:57:05 +00:00
Vitaliy Sapronenko
088960f597
Example with repeated texture (#13176)
# Objective

Fixes #11136 .
Fixes https://github.com/bevyengine/bevy/pull/11161.

## Solution

- Set the image sampler to repeat mode for u and v
- Set `uv_transform` of `StandardMaterial` to the tiling parameters
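
A hedged sketch combining both steps (the asset path is hypothetical):

```rust
use bevy::math::Affine2;
use bevy::prelude::*;
use bevy::render::texture::{
    ImageAddressMode, ImageLoaderSettings, ImageSampler, ImageSamplerDescriptor,
};

fn setup(asset_server: Res<AssetServer>, mut materials: ResMut<Assets<StandardMaterial>>) {
    // Load the image with a repeating sampler...
    let texture = asset_server.load_with_settings(
        "textures/tile.png",
        |settings: &mut ImageLoaderSettings| {
            settings.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
                address_mode_u: ImageAddressMode::Repeat,
                address_mode_v: ImageAddressMode::Repeat,
                ..default()
            });
        },
    );
    // ...then scale the UVs so the texture repeats across the mesh.
    let _material = materials.add(StandardMaterial {
        base_color_texture: Some(texture),
        uv_transform: Affine2::from_scale(Vec2::new(4.0, 4.0)),
        ..default()
    });
}
```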

## Testing

Got this view on example run

![image](https://github.com/bevyengine/bevy/assets/17225606/a5f7c414-7966-4c31-97e1-320241ddc75b)
2024-05-05 17:29:26 +00:00
arcashka
6027890a11
move wgsl color operations from bevy_pbr to bevy_render (#13209)
# Objective

The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and
color conversion functions. Both of those should be accessible without
enabling the `bevy_pbr` feature. For example, tonemapping can be done in a
non-PBR scenario, and it uses the color conversion functions.

Fixes #13207

## Solution

* Move mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos
---

## Changelog
* Moved mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Migration Guide
In your shader code, replace usage of the mathematical constants from
`bevy_pbr::utils` with the same constants from `bevy_render::maths`.
2024-05-04 10:30:23 +00:00
Bram Buurlage
d390420093
Implement Auto Exposure plugin (#12792)
# Objective

- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
  - Metering masks
  - Compensation curves
  - Smooth exposure transitions 

This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.

## Solution

An auto exposure compute shader builds a 64 bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it allows giving more weight to certain areas
based on a mask.
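
A minimal sketch of enabling the feature (component and plugin names per this PR; the plugin is not part of `DefaultPlugins`):

```rust
use bevy::core_pipeline::auto_exposure::{AutoExposurePlugin, AutoExposureSettings};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, AutoExposurePlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands) {
    // Exposure adapts per camera; the defaults give smooth transitions.
    commands.spawn((Camera3dBundle::default(), AutoExposureSettings::default()));
}
```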

---

## Changelog

- Added: AutoExposure plugin that allows adjusting a camera's exposure
based on its scene's luminance.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-05-03 17:45:17 +00:00
Patrick Walton
31835ff76d
Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).

This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.

This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.

The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.
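
A hedged sketch of the component (the margin ranges define the crossfade bands at each end of the visible range):

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn add_lod_range(commands: &mut Commands, entity: Entity) {
    // Visible from 5 to 20 units away, fading in over 4..5 and out over 20..21.
    commands.entity(entity).insert(VisibilityRange {
        start_margin: 4.0..5.0,
        end_margin: 20.0..21.0,
    });
}
```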

Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.

Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.

A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.

[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map

[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.

## Changelog

### Added

* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).

## Screenshots

High-poly model:
![Screenshot 2024-04-09 185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)

Low-poly model up close:
![Screenshot 2024-04-09 185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)

Crossfading between the two:
![Screenshot 2024-04-09 185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-05-03 00:11:35 +00:00
Patrick Walton
961b24deaf
Implement filmic color grading. (#13121)
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:

* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].

* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.

* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function].

* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.
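
A hedged sketch of adjusting a few of these settings on a camera, using the sectioned API described in the migration guide below:

```rust
use bevy::prelude::*;
use bevy::render::view::ColorGrading;

fn setup(mut commands: Commands) {
    let mut grading = ColorGrading::default();
    // Warm the white point slightly and boost overall saturation.
    grading.global.temperature = 0.2;
    grading.global.post_saturation = 1.1;
    // Apply a gentle gamma lift to shadows, midtones, and highlights alike.
    for section in grading.all_sections_mut() {
        section.gamma = 1.05;
    }
    commands.spawn(Camera3dBundle {
        color_grading: grading,
        ..default()
    });
}
```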

A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.

[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space

[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D

[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space

[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV

[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function

## Changelog

### Added

* Many new filmic color grading options have been added to the
`ColorGrading` component.

## Migration Guide

* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.

## Screenshots

![Screenshot 2024-04-27 143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)

![Screenshot 2024-04-27 143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
2024-05-02 12:18:59 +00:00
JMS55
6d6810c90d
Meshlet continuous LOD (#12755)
Adds a basic level of detail system to meshlets. An extremely brief
summary is as follows:
* In `from_mesh.rs`, once we've built the first level of clusters, we
group clusters, simplify the new mega-clusters, and then split the
simplified groups back into regular sized clusters. Repeat several times
(ideally until you can't anymore). This forms a directed acyclic graph
(DAG), where the children are the meshlets from the previous level, and
the parents are the more simplified versions of their children. The leaf
nodes are meshlets formed from the original mesh.
* In `cull_meshlets.wgsl`, each cluster selects whether to render or not
based on the LOD bounding sphere (different than the culling bounding
sphere) of the current meshlet, the LOD bounding sphere of its parent
(the meshlet group from simplification), and the simplification error
relative to its children of both the current meshlet and its parent
meshlet. This kind of breaks two pass occlusion culling, which will be
fixed in a future PR by using an HZB from the previous frame to get the
initial list of occluders.
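
A hypothetical distillation of that per-cluster test (the real version works with projected LOD bounding spheres and errors in `cull_meshlets.wgsl`):

```rust
// A cluster renders exactly when its own simplification error is acceptable
// for this view but its parent group's error is not; since a parent's error
// is at least its children's, each DAG path contributes exactly one cut.
fn cluster_should_render(own_error: f32, parent_error: f32, threshold: f32) -> bool {
    own_error <= threshold && parent_error > threshold
}
```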

Many, _many_ improvements to be done in the future
https://github.com/bevyengine/bevy/issues/11518, not least of which is
code quality and speed. I don't even expect this to work on many types
of input meshes. This is just a basic implementation/draft for
collaboration.

How much we want to do in this PR is arguable; I'll leave that up to
maintainers. I've erred on the side of "as basic as possible".

References:
* Slides 27-77 (video available on youtube)
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
*
https://blog.traverseresearch.nl/creating-a-directed-acyclic-graph-from-a-mesh-1329e57286e5
*
https://jglrxavpok.github.io/2024/01/19/recreating-nanite-lod-generation.html,
https://jglrxavpok.github.io/2024/03/12/recreating-nanite-faster-lod-generation.html,
https://jglrxavpok.github.io/2024/04/02/recreating-nanite-runtime-lod-selection.html,
and https://github.com/jglrxavpok/Carrot
*
https://github.com/gents83/INOX/tree/master/crates/plugins/binarizer/src
* https://cs418.cs.illinois.edu/website/text/nanite.html


![image](https://github.com/bevyengine/bevy/assets/47158642/e40bff9b-7d0c-4a19-a3cc-2aad24965977)

![image](https://github.com/bevyengine/bevy/assets/47158642/442c7da3-7761-4da7-9acd-37f15dd13e26)

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
2024-04-23 21:43:53 +00:00
François Mockers
de3ec47f3f
Fix example game of life (#12897)
# Objective

- Example `compute_shader_game_of_life` is random and does not follow the
rules of the game of life: at each step, it randomly reads some pixels
of the current step and some of the previous step instead of only from
the previous step
- Fixes #9353 

## Solution

- Adopted from #9678 
- Added a switch of the texture displayed every frame, as otherwise the
game of life looks wrong
- Added a way to display the texture bigger so that I could manually
check everything was right

---------

Co-authored-by: Sludge <96552222+SludgePhD@users.noreply.github.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
2024-04-08 17:19:07 +00:00
IceSentry
08b41878d7
Add gpu readback example (#12877)
# Objective

- It's pretty common for users to want to read data back from the gpu
and into the main world

## Solution

- Add a simple example that shows how to read data back from the gpu and
send it to the main world using a channel.
- The example is largely based on this wgpu example but adapted to bevy:
fb305b85f6/examples/src/repeated_compute/mod.rs
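
A hedged sketch of the channel plumbing (the example uses `crossbeam_channel`; the map-and-read machinery in the render world is elided, and the resource names are illustrative):

```rust
use bevy::prelude::*;
use crossbeam_channel::Receiver;

// The render world keeps the Sender; the main world polls the Receiver.
#[derive(Resource)]
struct MainWorldReceiver(Receiver<Vec<u32>>);

fn receive_readback(receiver: Res<MainWorldReceiver>) {
    // try_recv never blocks the main world; data arrives a frame or so late.
    while let Ok(data) = receiver.0.try_recv() {
        info!("received {} values back from the GPU", data.len());
    }
}
```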

---------

Co-authored-by: stormy <120167078+stowmyy@users.noreply.github.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
2024-04-08 17:08:20 +00:00
Remi Godin
df76fd4a1b
Programmed soundtrack example (#12774)
Created the soundtrack example with fade-in and fade-out features, added
new assets, and updated the credits.

# Objective
- Fixes #12651 

## Solution
- Created a resource to hold the track list. 
- The audio assets are then loaded by the asset server and added to the
track list.
- Once the game is in a specific state, an `AudioBundle` is spawned and
plays the appropriate track.
- The audio volume starts at zero and is then incremented gradually
until it reaches full volume.
- Once the game state changes, the current track fades out, and a new
one fades in at the same time, offering a relatively seamless
transition.
- Once a track is completely faded out, it is despawned from the app.
- Game state changes are simulated through a `Timer` for simplicity.
- Track change system is only run if there is a change in the
`GameState` resource.
- All tracks are used according to their respective licenses.
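
A hedged sketch of the fade-in half of that logic (`FadeIn` and `FADE_TIME` are illustrative names):

```rust
use bevy::audio::AudioSink;
use bevy::prelude::*;

#[derive(Component)]
struct FadeIn;

const FADE_TIME: f32 = 2.0;

fn fade_in(
    mut commands: Commands,
    sinks: Query<(Entity, &AudioSink), With<FadeIn>>,
    time: Res<Time>,
) {
    for (entity, sink) in &sinks {
        // Ramp the volume up each frame; stop fading once at full volume.
        sink.set_volume(sink.volume() + time.delta_seconds() / FADE_TIME);
        if sink.volume() >= 1.0 {
            sink.set_volume(1.0);
            commands.entity(entity).remove::<FadeIn>();
        }
    }
}
```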

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-03-29 20:32:30 +00:00
JMS55
4f20faaa43
Meshlet rendering (initial feature) (#10164)
# Objective
- Implements a more efficient, GPU-driven
(https://github.com/bevyengine/bevy/issues/1342) rendering pipeline
based on meshlets.
- Meshes are split into small clusters of triangles called meshlets,
each of which acts as a mini index buffer into the larger mesh data.
Meshlets can be compressed, streamed, culled, and batched much more
efficiently than monolithic meshes.


![image](https://github.com/bevyengine/bevy/assets/47158642/cb2aaad0-7a9a-4e14-93b0-15d4e895b26a)

![image](https://github.com/bevyengine/bevy/assets/47158642/7534035b-1eb7-4278-9b99-5322e4401715)

# Misc
* Future work: https://github.com/bevyengine/bevy/issues/11518
* Nanite reference:
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
Two pass occlusion culling explained very well:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
2024-03-25 19:08:27 +00:00
Patrick Walton
dfdf2b9ea4
Implement the AnimationGraph, allowing for multiple animations to be blended together. (#11989)
This is an implementation of RFC #51:
https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md

Note that the implementation strategy is different from the one outlined
in that RFC, because two-phase animation has now landed.

# Objective

Bevy needs animation blending. The RFC for this is [RFC 51].

## Solution

This is an implementation of the RFC. Note that the implementation
strategy is different from the one outlined there, because two-phase
animation has now landed.

This is just a draft to get the conversation started. Currently we're
missing a few things:

- [x] A fully-fleshed-out mechanism for transitions
- [x] A serialization format for `AnimationGraph`s
- [x] Examples are broken, other than `animated_fox`
- [x] Documentation

---

## Changelog

### Added

* The `AnimationPlayer` has been reworked to support blending multiple
animations together through an `AnimationGraph`, and as such will no
longer function unless a `Handle<AnimationGraph>` has been added to the
entity containing the player. See [RFC 51] for more details.

* Transition functionality has moved from the `AnimationPlayer` to a new
component, `AnimationTransitions`, which works in tandem with the
`AnimationGraph`.

## Migration Guide

* `AnimationPlayer`s can no longer play animations by themselves and
need to be paired with a `Handle<AnimationGraph>`. Code that was using
`AnimationPlayer` to play animations will need to create an
`AnimationGraph` asset first, add a node for the clip (or clips) you
want to play, and then supply the index of that node to the
`AnimationPlayer`'s `play` method, as in the sketch below.

* The `AnimationPlayer::play_with_transition()` method has been removed
and replaced with the `AnimationTransitions` component. If you were
previously using `AnimationPlayer::play_with_transition()`, add all
animations that you were playing to the `AnimationGraph`, and create an
`AnimationTransitions` component to manage the blending between them.

[RFC 51]:
https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md
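
A minimal sketch of the migration described above (the clip's asset path is hypothetical):

```rust
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut graphs: ResMut<Assets<AnimationGraph>>,
) {
    // Build a one-clip graph and hand its node index to the player.
    let mut graph = AnimationGraph::new();
    let clip = asset_server.load("models/animated/Fox.glb#Animation0");
    let node = graph.add_clip(clip, 1.0, graph.root);
    let graph_handle = graphs.add(graph);

    let mut player = AnimationPlayer::default();
    player.play(node).repeat();

    // The Handle<AnimationGraph> must live on the same entity as the player.
    commands.spawn((player, graph_handle));
}
```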

---------

Co-authored-by: Rob Parrett <robparrett@gmail.com>
2024-03-07 20:22:42 +00:00
Félix Lescaudey de Maneville
fc202f2e3d
Slicing support for texture atlas (#12059)
# Objective

Follow up to #11600 and #10588 
https://github.com/bevyengine/bevy/issues/11944 made clear that some
people want to use slicing with texture atlases

## Changelog

* Added support for `TextureAtlas` slicing and tiling.
`SpriteSheetBundle` and `AtlasImageBundle` can now use `ImageScaleMode`
* Added new `ui_texture_atlas_slice` example using a texture sheet
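
A hedged sketch of pairing slicing with an atlas-based sprite (handles and border size are illustrative):

```rust
use bevy::prelude::*;
use bevy::sprite::{BorderRect, ImageScaleMode, SliceScaleMode, TextureSlicer};

fn spawn_sliced(
    commands: &mut Commands,
    texture: Handle<Image>,
    layout: Handle<TextureAtlasLayout>,
) {
    commands.spawn((
        SpriteSheetBundle {
            texture,
            atlas: TextureAtlas { layout, index: 0 },
            ..default()
        },
        // 9-slice the frame: 16px borders stay unscaled while the center stretches.
        ImageScaleMode::Sliced(TextureSlicer {
            border: BorderRect::square(16.0),
            center_scale_mode: SliceScaleMode::Stretch,
            sides_scale_mode: SliceScaleMode::Stretch,
            max_corner_scale: 1.0,
        }),
    ));
}
```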

<img width="798" alt="Screenshot 2024-02-23 at 11 58 35"
src="https://github.com/bevyengine/bevy/assets/26703856/47a8b764-127c-4a06-893f-181703777501">

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Pablo Reinhardt <126117294+pablo-lua@users.noreply.github.com>
2024-03-05 16:05:39 +00:00
François
d261a86b9f
compute shader game of life example: use R32Float instead of Rgba8Unorm (#12155)
# Objective

- Fixes #9670 
- Avoid a crash in CI due to
```
thread 'Compute Task Pool (0)' panicked at /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.19.1/src/backend/wgpu_core.rs:3009:5:
wgpu error: Validation Error

Caused by:
    In Device::create_bind_group
    The adapter does not support read access for storages texture of format Rgba8Unorm
```

## Solution

- Use an `R32Float` texture instead of an `Rgba8Unorm` as it is a tier 1
texture format https://github.com/gpuweb/gpuweb/issues/3838 and is more
widely supported
- This should also improve support for webgpu in the next wgpu version
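
A hedged sketch of creating such a storage texture with the more widely supported format (dimensions are illustrative):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat, TextureUsages};

fn make_game_of_life_texture(images: &mut Assets<Image>) -> Handle<Image> {
    let mut image = Image::new_fill(
        Extent3d { width: 1280, height: 720, depth_or_array_layers: 1 },
        TextureDimension::D2,
        &[0, 0, 0, 0],           // one R32Float texel (4 bytes), initialized to 0.0
        TextureFormat::R32Float, // tier 1 storage format, unlike Rgba8Unorm
        RenderAssetUsages::RENDER_WORLD,
    );
    image.texture_descriptor.usage =
        TextureUsages::COPY_DST | TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING;
    images.add(image)
}
```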
2024-02-27 13:57:41 +00:00
Félix Lescaudey de Maneville
ab16f5ed6a
UI Texture 9 slice (#11600)
> Follow up to #10588 
> Closes #11749 (Supersedes #11756)

Enable Texture slicing for the following UI nodes:
- `ImageBundle`
- `ButtonBundle`

<img width="739" alt="Screenshot 2024-01-29 at 13 57 43"
src="https://github.com/bevyengine/bevy/assets/26703856/37675681-74eb-4689-ab42-024310cf3134">

I also added a collection of `fantazy-ui-borders` from
[Kenney's](www.kenney.nl) assets, with the appropriate license (CC).
If it's a problem I can use the same textures as the `sprite_slice`
example

# Work done

Added the `ImageScaleMode` component to the targeted bundles, most of
the logic is directly reused from `bevy_sprite`.
The only additional internal component is the UI-specific
`ComputedSlices`, which does the same thing as its sprite equivalent
but adapted to UI code.

Again, the slicing is not compatible with `TextureAtlas`; it's something
I need to tackle more deeply in the future

# Fixes

* [x] I noticed that `TextureSlicer::compute_slices` could infinitely
loop if the border was larger than the image half extents; now an error
is triggered and the texture will fall back to being stretched
* [x] I noticed that when using small textures with very small *tiling*
options we could generate hundreds of thousands of slices. Now I set a
minimum size of 1 pixel per slice, which is already ridiculously small,
and a warning will be sent at runtime when slice count goes above 1000
* [x] Sprite slicing with `flip_x` or `flip_y` would give incorrect
results; correct flipping is now supported for both sprites and UI image
nodes thanks to @odecay's observation

# GPU Alternative

I created a separate branch attempting to implement 9-slicing and
tiling directly through the `ui.wgsl` fragment shader. It works but
requires sending more data to the GPU:
- slice border
- tiling factors

And more importantly, the actual quad *scale* which is hard to put in
the shader with the current code, so that would be for a later iteration
2024-02-07 20:07:53 +00:00
Patrick Walton
4c15dd0fc5
Implement irradiance volumes. (#10268)
# Objective

Bevy could benefit from *irradiance volumes*, also known as *voxel
global illumination* or simply as light probes (though this term is not
preferred, as multiple techniques can be called light probes).
Irradiance volumes are a form of baked global illumination; they work by
sampling the light at the centers of each voxel within a cuboid. At
runtime, the voxels surrounding the fragment center are sampled and
interpolated to produce indirect diffuse illumination.

## Solution

This is divided into two sections. The first is copied and pasted from
the irradiance volume module documentation and describes the technique.
The second part consists of notes on the implementation.

### Overview

An *irradiance volume* is a cuboid voxel region consisting of
regularly-spaced precomputed samples of diffuse indirect light. They're
ideal if you have a dynamic object such as a character that can move
about
static non-moving geometry such as a level in a game, and you want that
dynamic object to be affected by the light bouncing off that static
geometry.

To use irradiance volumes, you need to precompute, or *bake*, the
indirect
light in your scene. Bevy doesn't currently come with a way to do this.
Fortunately, [Blender] provides a [baking tool] as part of the Eevee
renderer, and its irradiance volumes are compatible with those used by
Bevy.
The [`bevy-baked-gi`] project provides a tool, `export-blender-gi`, that
can
extract the baked irradiance volumes from the Blender `.blend` file and
package them up into a `.ktx2` texture for use by the engine. See the
documentation in the `bevy-baked-gi` project for more details as to this
workflow.

Like all light probes in Bevy, irradiance volumes are 1×1×1 cubes that
can
be arbitrarily scaled, rotated, and positioned in a scene with the
[`bevy_transform::components::Transform`] component. The 3D voxel grid
will
be stretched to fill the interior of the cube, and the illumination from
the
irradiance volume will apply to all fragments within that bounding
region.
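
A hedged sketch of spawning one (component paths and field names follow the Bevy release this landed in; the asset path is hypothetical):

```rust
use bevy::pbr::irradiance_volume::IrradianceVolume;
use bevy::pbr::LightProbe;
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpatialBundle {
            // The 1×1×1 cube is scaled to cover the region the volume affects.
            transform: Transform::from_scale(Vec3::new(10.0, 4.0, 10.0)),
            ..default()
        },
        IrradianceVolume {
            voxels: asset_server.load("irradiance_volumes/example.ktx2"),
            intensity: 150.0,
        },
        LightProbe,
    ));
}
```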

Bevy's irradiance volumes are based on Valve's [*ambient cubes*] as used
in
*Half-Life 2* ([Mitchell 2006], slide 27). These encode a single color
of
light from the six 3D cardinal directions and blend the sides together
according to the surface normal.

The primary reason for choosing ambient cubes is to match Blender, so
that
its Eevee renderer can be used for baking. However, they also have some
advantages over the common second-order spherical harmonics approach:
ambient cubes don't suffer from ringing artifacts, they are smaller (6
colors for ambient cubes as opposed to 9 for spherical harmonics), and
evaluation is faster. A smaller basis allows for a denser grid of voxels
with the same storage requirements.

If you wish to use a tool other than `export-blender-gi` to produce the
irradiance volumes, you'll need to pack the irradiance volumes in the
following format. The irradiance volume of resolution *(Rx, Ry, Rz)* is
expected to be a 3D texture of dimensions *(Rx, 2Ry, 3Rz)*. The
unnormalized
texture coordinate *(s, t, p)* of the voxel at coordinate *(x, y, z)*
with
side *S* ∈ *{-X, +X, -Y, +Y, -Z, +Z}* is as follows:

```text
s = x

t = y + ⎰  0 if S ∈ {-X, -Y, -Z}
        ⎱ Ry if S ∈ {+X, +Y, +Z}

        ⎧   0 if S ∈ {-X, +X}
p = z + ⎨  Rz if S ∈ {-Y, +Y}
        ⎩ 2Rz if S ∈ {-Z, +Z}
```

Visually, in a left-handed coordinate system with Y up, viewed from the
right, the 3D texture looks like a stacked series of voxel grids, one
for
each cube side, in this order:

| **+X** | **+Y** | **+Z** |
| ------ | ------ | ------ |
| **-X** | **-Y** | **-Z** |

A terminology note: Other engines may refer to irradiance volumes as
*voxel
global illumination*, *VXGI*, or simply as *light probes*. Sometimes
*light
probe* refers to what Bevy calls a reflection probe. In Bevy, *light
probe*
is a generic term that encompasses all cuboid bounding regions that
capture
indirect illumination, whether based on voxels or not.

Note that, if binding arrays aren't supported (e.g. on WebGPU or WebGL
2),
then only the closest irradiance volume to the view will be taken into
account during rendering.

[*ambient cubes*]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf

[Mitchell 2006]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf

[Blender]: http://blender.org/

[baking tool]:
https://docs.blender.org/manual/en/latest/render/eevee/render_settings/indirect_lighting.html

[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi

### Implementation notes

This patch generalizes light probes so as to reuse as much code as
possible between irradiance volumes and the existing reflection probes.
This approach was chosen because both techniques share numerous
similarities:

1. Both irradiance volumes and reflection probes are cuboid bounding
regions.
2. Both are responsible for providing baked indirect light.
3. Both techniques involve presenting a variable number of textures to
the shader from which indirect light is sampled. (In the current
implementation, this uses binding arrays.)
4. Both irradiance volumes and reflection probes require gathering and
sorting probes by distance on CPU.
5. Both techniques require the GPU to search through a list of bounding
regions.
6. Both will eventually want to have falloff so that we can smoothly
blend as objects enter and exit the probes' influence ranges. (This is
not implemented yet to keep this patch relatively small and reviewable.)

To do this, we generalize most of the methods in the reflection probes
patch #11366 to be generic over a trait, `LightProbeComponent`. This
trait is implemented by both `EnvironmentMapLight` (for reflection
probes) and `IrradianceVolume` (for irradiance volumes). Using a trait
will allow us to add more types of light probes in the future. In
particular, I highly suspect we will want real-time reflection planes
for mirrors in the future, which can be easily slotted into this
framework.

## Changelog

### Added
* A new `IrradianceVolume` asset type is available for baked voxelized
light probes. You can bake the global illumination using Blender or
another tool of your choice and use it in Bevy to apply indirect
illumination to dynamic objects.
2024-02-06 23:23:20 +00:00
Marco Buono
91c467ebfc
Gate diffuse and specular transmission behind shader defs (#11627)
# Objective

- Address #10338

## Solution

- When implementing specular and diffuse transmission, I inadvertently
introduced a performance regression. On high-end hardware it is barely
noticeable, but **for lower-end hardware it can be pretty brutal**. If I
understand it correctly, this is likely due to use of masking by the GPU
to implement control flow, which means that you still pay the price for
the branches you don't take;
- To avoid that, this PR introduces new shader defs (controlled via
`StandardMaterialKey`) that conditionally include the transmission
logic, that way the shader code for both types of transmission isn't
even sent to the GPU if you're not using them;
- This PR also renames ~~`STANDARDMATERIAL_NORMAL_MAP`~~ to
`STANDARD_MATERIAL_NORMAL_MAP` for consistency with the naming
convention used elsewhere in the codebase. (Drive-by fix)
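
A hedged sketch of a material that opts into the specular-transmission path (and thus gets the new shader def set via `StandardMaterialKey`); values are illustrative:

```rust
use bevy::prelude::*;

fn setup(mut materials: ResMut<Assets<StandardMaterial>>) {
    // Glass-like material: only materials like this pay for the transmission
    // shader code; opaque materials skip it entirely.
    let _glass = materials.add(StandardMaterial {
        specular_transmission: 0.9,
        thickness: 1.0,
        ior: 1.5,
        ..default()
    });
}
```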

---

## Changelog

- Added new shader defs, set when using transmission in the
`StandardMaterial`:
  - `STANDARD_MATERIAL_SPECULAR_TRANSMISSION`;
  - `STANDARD_MATERIAL_DIFFUSE_TRANSMISSION`;
  - `STANDARD_MATERIAL_SPECULAR_OR_DIFFUSE_TRANSMISSION`.
- Fixed performance regression caused by the introduction of
transmission, by gating transmission shader logic behind the newly
introduced shader defs;
- Renamed ~~`STANDARDMATERIAL_NORMAL_MAP`~~ to
`STANDARD_MATERIAL_NORMAL_MAP` for consistency;

## Migration Guide

- If you were using `#ifdef STANDARDMATERIAL_NORMAL_MAP` on your shader
code, make sure to update the name to `STANDARD_MATERIAL_NORMAL_MAP`;
(with an underscore between `STANDARD` and `MATERIAL`)
2024-02-02 15:01:56 +00:00
Zachary Harrold
afa7b5cba5
Added Support for Extension-less Assets (#10153)
# Objective

- Addresses **Support processing and loading files without extensions**
from #9714
- Addresses **More runtime loading configuration** from #9714
- Fixes #367
- Fixes #10703

## Solution

`AssetServer::load::<A>` and `AssetServer::load_with_settings::<A>` can
now use the `Asset` type parameter `A` to select a registered
`AssetLoader` without inspecting the provided `AssetPath`. This change
cascades onto `LoadContext::load` and `LoadContext::load_with_settings`.
This allows the loading of assets which have incorrect or ambiguous file
extensions.

```rust
// Allow the type to be inferred by context
let handle = asset_server.load("data/asset_no_extension");

// Hint the type through the handle
let handle: Handle<CustomAsset> = asset_server.load("data/asset_no_extension");

// Explicit through turbofish
let handle = asset_server.load::<CustomAsset>("data/asset_no_extension");
```

Since a single `AssetPath` no longer maps 1:1 with an `Asset`, I've also
modified how assets are loaded to permit multiple asset types to be
loaded from a single path. This allows for two different `AssetLoaders`
(which return different types of assets) to both load a single path (if
requested).

```rust
// Uses GltfLoader
let model = asset_server.load::<Gltf>("cube.gltf");

// Hypothetical Blob loader for data transmission (for example)
let blob = asset_server.load::<Blob>("cube.gltf");
```

As these changes are reflected in the `LoadContext` as well as the
`AssetServer`, custom `AssetLoaders` can also take advantage of this
behaviour to create more complex assets.

---

## Change Log

- Updated `custom_asset` example to demonstrate extension-less assets.
- Added `AssetServer::get_handles_untyped` and Added
`AssetServer::get_path_ids`

## Notes

As a part of that refactor, I chose to store `AssetLoader`s (within
`AssetLoaders`) using a `HashMap<TypeId, ...>` instead of a `Vec<...>`.
My reasoning for this was I needed to add a relationship between `Asset`
`TypeId`s and the `AssetLoader`, so instead of having a `Vec` and a
`HashMap`, I combined the two, removing the `usize` index from the
adjacent maps.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-01-31 14:58:08 +00:00
irate
1e7e6c93e6
Fix scene example (#11289)
Since #9907 the generation starts at `1` instead of `0`, so
`Entity::to_bits` now returns `4294967296` (i.e. `u32::MAX + 1`) as the
lowest number instead of `0`.

Without this change scene loading fails with this error message:
`ERROR bevy_asset::server: Failed to load asset
'scenes/load_scene_example.scn.ron' with asset loader
'bevy_scene::scene_loader::SceneLoader': Could not parse RON: 8:6:
Invalid generation bits`
2024-01-22 15:14:41 +00:00
Patrick Walton
83d6600267
Implement minimal reflection probes (fixed macOS, iOS, and Android). (#11366)
This pull request re-submits #10057, which was backed out for breaking
macOS, iOS, and Android. I've tested this version on macOS and Android
and on the iOS simulator.

# Objective

This pull request implements *reflection probes*, which generalize
environment maps to allow for multiple environment maps in the same
scene, each of which has an axis-aligned bounding box. This is a
standard feature of physically-based renderers and was inspired by [the
corresponding feature in Blender's Eevee renderer].

## Solution

This is a minimal implementation of reflection probes that allows
artists to define cuboid bounding regions associated with environment
maps. For every view, on every frame, a system builds up a list of the
nearest 4 reflection probes that are within the view's frustum and
supplies that list to the shader. The PBR fragment shader searches
through the list, finds the first containing reflection probe, and uses
it for indirect lighting, falling back to the view's environment map if
none is found. Both forward and deferred renderers are fully supported.

A reflection probe is an entity with a pair of components, *LightProbe*
and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to
position it in the world). The *LightProbe* component (along with the
*Transform*) defines the bounding region, while the
*EnvironmentMapLight* component specifies the associated diffuse and
specular cubemaps.
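
A hedged sketch of that pair of components (the `intensity` field name follows recent Bevy releases; asset paths are hypothetical):

```rust
use bevy::pbr::{EnvironmentMapLight, LightProbe};
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpatialBundle {
            // Scale the unit cube to the region the probe should cover.
            transform: Transform::from_scale(Vec3::splat(8.0)),
            ..default()
        },
        LightProbe,
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
            intensity: 900.0,
        },
    ));
}
```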

A frequent question is "why two components instead of just one?" The
advantages of this setup are:

1. It's readily extensible to other types of light probes, in particular
*irradiance volumes* (also known as ambient cubes or voxel global
illumination), which use the same approach of bounding cuboids. With a
single component that applies to both reflection probes and irradiance
volumes, we can share the logic that implements falloff and blending
between multiple light probes between both of those features.

2. It reduces duplication between the existing *EnvironmentMapLight* and
these new reflection probes. Systems can treat environment maps attached
to cameras the same way they treat environment maps applied to
reflection probes if they wish.

Internally, we gather up all environment maps in the scene and place
them in a cubemap array. At present, this means that all environment
maps must have the same size, mipmap count, and texture format. A
warning is emitted if this restriction is violated. We could potentially
relax this in the future as part of the automatic mipmap generation
work, which could easily do texture format conversion as part of its
preprocessing.

An easy way to generate reflection probe cubemaps is to bake them in
Blender and use the `export-blender-gi` tool that's part of the
[`bevy-baked-gi`] project. This tool takes a `.blend` file containing
baked cubemaps as input and exports cubemap images, pre-filtered with an
embedded fork of the [glTF IBL Sampler], alongside a corresponding
`.scn.ron` file that the scene spawner can use to recreate the
reflection probes.

Note that this is intentionally a minimal implementation, to aid
reviewability. Known issues are:

* Reflection probes are basically unsupported on WebGL 2, because WebGL
2 has no cubemap arrays. (Strictly speaking, you can have precisely one
reflection probe in the scene if you have no other cubemaps anywhere,
but this isn't very useful.)

* Reflection probes have no falloff, so reflections will abruptly change
when objects move from one bounding region to another.

* As mentioned before, all cubemaps in the world of a given type
(diffuse or specular) must have the same size, format, and mipmap count.

Future work includes:

* Blending between multiple reflection probes.

* A falloff/fade-out region so that reflected objects disappear
gradually instead of vanishing all at once.

* Irradiance volumes for voxel-based global illumination. This should
reuse much of the reflection probe logic, as they're both GI techniques
based on cuboid bounding regions.

* Support for WebGL 2, by breaking batches when reflection probes are
used.

These issues notwithstanding, I think it's best to land this with
roughly the current set of functionality, because this patch is useful
as is and adding everything above would make the pull request
significantly larger and harder to review.

---

## Changelog

### Added

* A new *LightProbe* component is available that specifies a bounding
region that an *EnvironmentMapLight* applies to. The combination of a
*LightProbe* and an *EnvironmentMapLight* offers *reflection probe*
functionality similar to that available in other engines.

[the corresponding feature in Blender's Eevee renderer]:
https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html

[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi

[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler
2024-01-19 07:33:52 +00:00
Félix Lescaudey de Maneville
01139b3472
Sprite slicing and tiling (#10588)
> Replaces #5213

# Objective

Implement sprite tiling and [9-slice
scaling](https://en.wikipedia.org/wiki/9-slice_scaling) for
`bevy_sprite`, allowing slice scaling and texture tiling.

Basic scaling vs 9 slice scaling:


![Traditional_scaling_vs_9-slice_scaling](https://user-images.githubusercontent.com/26703856/177335801-27f6fa27-c569-4ce6-b0e6-4f54e8f4e80a.svg)

Slicing example:

<img width="481" alt="Screenshot 2022-07-05 at 15 05 49"
src="https://user-images.githubusercontent.com/26703856/177336112-9e961af0-c0af-4197-aec9-430c1170a79d.png">

Tiling example:

<img width="1329" alt="Screenshot 2023-11-16 at 13 53 32"
src="https://github.com/bevyengine/bevy/assets/26703856/14db39b7-d9e0-4bc3-ba0e-b1f2db39ae8f">

# Solution

- `SpriteBundle` now has a `scale_mode` component storing a
`SpriteScaleMode` enum with three variants:
  - `Stretched` (default)
  - `Tiled` to have sprites tile horizontally and/or vertically
  - `Sliced` allowing 9-slicing the texture and optionally tiling some
sections with a `TextureSlicer` (see the sketch after this list)
- `bevy_sprite` has two extra systems to compute a
`ComputedTextureSlices` if necessary:
  - One system reacts to changes on `Sprite`, `Handle<Image>` or
`SpriteScaleMode`
  - The other listens to `AssetEvent<Image>` to compute slices on sprites
when the texture is ready or changed
- I updated the `bevy_sprite` extraction stage to extract potentially
multiple textures instead of one, depending on the presence of
`ComputedTextureSlices`
- I added two examples showcasing the slicing and tiling feature.
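
For example, spawning a 9-sliced sprite might look like this (a sketch
using this PR's names, `SpriteScaleMode` and `TextureSlicer`; the final
merged API may differ):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn(Camera2dBundle::default());
    commands.spawn(SpriteBundle {
        texture: asset_server.load("textures/panel.png"),
        sprite: Sprite {
            // Stretch the sliced texture to this size; the corners
            // keep their original pixel size.
            custom_size: Some(Vec2::new(400.0, 200.0)),
            ..default()
        },
        // Hypothetical field per this PR's description: 9-slice with a
        // 16px border, tiling behavior left at the slicer's defaults.
        scale_mode: SpriteScaleMode::Sliced(TextureSlicer {
            border: BorderRect::square(16.0),
            ..default()
        }),
        ..default()
    });
}
```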

The addition of `ComputedTextureSlices` as a cache avoids querying the
image data, to retrieve its dimensions, every frame in an extract or
prepare stage. It also reacts to changes, so we can have stuff like this
(tiling example):


https://github.com/bevyengine/bevy/assets/26703856/a349a9f3-33c3-471f-8ef4-a0e5dfce3b01

# Related 

- [ ] Once #5103 or #10099 is merged I can enable tiling and slicing for
texture sheets as ui

# To discuss

There is another option: treating slice/tiling as part of the asset,
using the new asset preprocessing, but I have no clue how to do it.

Also, instead of retrieving the `Image` dimensions, we could use the same
system as the sprite sheet and have the user give the image dimensions
directly (grid). But I think that's less user-friendly.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
2024-01-15 15:40:06 +00:00
François
3d996639a0
Revert "Implement minimal reflection probes. (#10057)" (#11307)
# Objective

- Make `main` work again on macOS, iOS, and Android
- Fixes #11281 
- Fixes #11282 
- Fixes #11283 
- Fixes #11299

## Solution

- Revert #10057
2024-01-12 20:41:51 +00:00
Patrick Walton
54a943d232
Implement minimal reflection probes. (#10057)
# Objective

This pull request implements *reflection probes*, which generalize
environment maps to allow for multiple environment maps in the same
scene, each of which has an axis-aligned bounding box. This is a
standard feature of physically-based renderers and was inspired by [the
corresponding feature in Blender's Eevee renderer].

## Solution

This is a minimal implementation of reflection probes that allows
artists to define cuboid bounding regions associated with environment
maps. For every view, on every frame, a system builds up a list of the
nearest 4 reflection probes that are within the view's frustum and
supplies that list to the shader. The PBR fragment shader searches
through the list, finds the first containing reflection probe, and uses
it for indirect lighting, falling back to the view's environment map if
none is found. Both forward and deferred renderers are fully supported.

A reflection probe is an entity with a pair of components, *LightProbe*
and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to
position it in the world). The *LightProbe* component (along with the
*Transform*) defines the bounding region, while the
*EnvironmentMapLight* component specifies the associated diffuse and
specular cubemaps.

A frequent question is "why two components instead of just one?" The
advantages of this setup are:

1. It's readily extensible to other types of light probes, in particular
*irradiance volumes* (also known as ambient cubes or voxel global
illumination), which use the same approach of bounding cuboids. With a
single component that applies to both reflection probes and irradiance
volumes, we can share the logic that implements falloff and blending
between multiple light probes between both of those features.

2. It reduces duplication between the existing *EnvironmentMapLight* and
these new reflection probes. Systems can treat environment maps attached
to cameras the same way they treat environment maps applied to
reflection probes if they wish.

Internally, we gather up all environment maps in the scene and place
them in a cubemap array. At present, this means that all environment
maps must have the same size, mipmap count, and texture format. A
warning is emitted if this restriction is violated. We could potentially
relax this in the future as part of the automatic mipmap generation
work, which could easily do texture format conversion as part of its
preprocessing.

An easy way to generate reflection probe cubemaps is to bake them in
Blender and use the `export-blender-gi` tool that's part of the
[`bevy-baked-gi`] project. This tool takes a `.blend` file containing
baked cubemaps as input and exports cubemap images, pre-filtered with an
embedded fork of the [glTF IBL Sampler], alongside a corresponding
`.scn.ron` file that the scene spawner can use to recreate the
reflection probes.

Note that this is intentionally a minimal implementation, to aid
reviewability. Known issues are:

* Reflection probes are basically unsupported on WebGL 2, because WebGL
2 has no cubemap arrays. (Strictly speaking, you can have precisely one
reflection probe in the scene if you have no other cubemaps anywhere,
but this isn't very useful.)

* Reflection probes have no falloff, so reflections will abruptly change
when objects move from one bounding region to another.

* As mentioned before, all cubemaps in the world of a given type
(diffuse or specular) must have the same size, format, and mipmap count.

Future work includes:

* Blending between multiple reflection probes.

* A falloff/fade-out region so that reflected objects disappear
gradually instead of vanishing all at once.

* Irradiance volumes for voxel-based global illumination. This should
reuse much of the reflection probe logic, as they're both GI techniques
based on cuboid bounding regions.

* Support for WebGL 2, by breaking batches when reflection probes are
used.

These issues notwithstanding, I think it's best to land this with
roughly the current set of functionality, because this patch is useful
as is and adding everything above would make the pull request
significantly larger and harder to review.

---

## Changelog

### Added

* A new *LightProbe* component is available that specifies a bounding
region that an *EnvironmentMapLight* applies to. The combination of a
*LightProbe* and an *EnvironmentMapLight* offers *reflection probe*
functionality similar to that available in other engines.

[the corresponding feature in Blender's Eevee renderer]:
https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html

[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi

[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler
2024-01-08 22:09:17 +00:00
Patrick Walton
dd14f3a477
Implement lightmaps. (#10231)
![Screenshot](https://i.imgur.com/A4KzWFq.png)

# Objective

Lightmaps, textures that store baked global illumination, have been a
mainstay of real-time graphics for decades. Bevy currently has no
support for them, so this pull request implements them.

## Solution

The new `Lightmap` component can be attached to any entity that contains
a `Handle<Mesh>` and a `StandardMaterial`. When present, it will be
applied in the PBR shader. Because multiple lightmaps are frequently
packed into atlases, each lightmap may have its own UV boundaries within
its texture. An `exposure` field is also provided, to control the
brightness of the lightmap.
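
For illustration, attaching a lightmap might look like this (a sketch;
the `Lightmap` field names follow this description, and the asset paths
are hypothetical):

```rust
use bevy::pbr::Lightmap;
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        // Any entity with a Handle<Mesh> and a StandardMaterial
        // qualifies; the mesh must carry a second UV channel (UV1).
        PbrBundle {
            mesh: asset_server.load("models/level.glb#Mesh0/Primitive0"),
            ..default()
        },
        Lightmap {
            image: asset_server.load("lightmaps/level.ktx2"),
            // `uv_rect` (atlas region) and `exposure`, per the
            // description above, are left at their defaults.
            ..default()
        },
    ));
}
```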

Note that this PR doesn't provide any way to bake the lightmaps. That
can be done with [The Lightmapper] or another solution, such as Unity's
Bakery.

---

## Changelog

### Added
* A new component, `Lightmap`, is available, for baked global
illumination. If your mesh has a second UV channel (UV1), and you attach
this component to the entity with that mesh, Bevy will apply the texture
referenced in the lightmap.

[The Lightmapper]: https://github.com/Naxela/The_Lightmapper

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-01-02 20:38:47 +00:00
Nurzhan Sakén
8067e46049
Add example for pixel-perfect grid snapping in 2D (#8112)
# Objective

Provide an example of how to achieve pixel-perfect "grid snapping" in 2D
via rendering to a texture. This is a common use case in retro pixel art
game development.

## Solution

Render sprites to a canvas via a Camera, then use another (scaled up)
Camera to render the resulting canvas to the screen. This example is
based on the `3d/render_to_texture.rs` example. Furthermore, this
example demonstrates mixing retro-style graphics with high-resolution
graphics, as well as pixel-snapped rendering of a
`MaterialMesh2dBundle`.
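
The core two-camera setup looks roughly like this (a sketch following
the render-to-texture pattern; sizes are arbitrary and the
`RenderLayers` separation between the two cameras is elided):

```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;
use bevy::render::render_resource::{
    Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
};

fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
    // Low-resolution canvas the pixel-art camera renders into.
    let size = Extent3d { width: 320, height: 180, ..default() };
    let mut canvas = Image {
        texture_descriptor: TextureDescriptor {
            label: None,
            size,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            mip_level_count: 1,
            sample_count: 1,
            usage: TextureUsages::TEXTURE_BINDING
                | TextureUsages::COPY_DST
                | TextureUsages::RENDER_ATTACHMENT,
            view_formats: &[],
        },
        ..default()
    };
    canvas.resize(size);
    let canvas_handle = images.add(canvas);

    // In-world camera: renders sprites into the canvas first.
    commands.spawn(Camera2dBundle {
        camera: Camera {
            order: -1,
            target: RenderTarget::Image(canvas_handle.clone()),
            ..default()
        },
        ..default()
    });

    // The canvas itself is drawn as a scaled-up sprite by the default
    // camera (RenderLayers would keep the two worlds apart).
    commands.spawn(SpriteBundle {
        texture: canvas_handle,
        transform: Transform::from_scale(Vec3::splat(4.0)),
        ..default()
    });
    commands.spawn(Camera2dBundle::default());
}
```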
2023-12-26 17:15:50 +00:00
Tygyh
0159df3856
Remove unused namespace declarations (#10965)
# Objective

- Shorten the list of namespaces in the logo.

## Solution

- Remove the unused namespace declarations.
2023-12-13 22:29:16 +00:00
davier
4d42713e77
Fix binding group in custom_material_2d.wgsl (#10841)
# Objective

Fix #10840 

## Solution

The material's binding group number was changed in #10485
2023-12-02 22:21:53 +00:00
JMS55
4bf20e7d27
Swap material and mesh bind groups (#10485)
# Objective
- Materials should be a more frequent rebind than meshes (due to being
able to use a single vertex buffer, such as in #10164) and therefore
should be in a higher bind group.

---

## Changelog
- For 2d and 3d mesh/material setups (but not UI materials, or other
rendering setups such as gizmos, sprites, or text), mesh data is now in
bind group 1, and material data is now in bind group 2, which is swapped
from how they were before.

## Migration Guide
- Custom 2d and 3d mesh/material shaders should now use bind group 2
`@group(2) @binding(x)` for their bound resources, instead of bind group
1.
- Many internal pieces of rendering code have changed so that mesh data
is now in bind group 1, and material data is now in bind group 2.
Semi-custom rendering setups (that don't use the Material or Material2d
APIs) should adapt to these changes.
2023-11-28 22:26:22 +00:00
Alex Okafor
5a4a2882c7
Fix load scene example to use proper serialization format for rotation field (#10638)
# Objective

Fixes #10479

## Solution

- Updated scene file
2023-11-21 01:14:50 +00:00
IceSentry
83a358bf33
Improve shader_material example (#10547)
# Objective

- The current shader code is misleading: it makes it look like a
struct is passed to bind group 0, but in reality only the color is
passed. They just happen to have the exact same memory layout, so wgsl
doesn't complain and it works.
- The struct is defined after the `impl Material` block, which is
backwards from pretty much every other usage of the `impl` block in
bevy.

## Solution

- Remove the unnecessary struct in the shader
- Move the `impl` block above the struct definition
2023-11-20 10:24:02 +00:00
Zachary Harrold
46b8e904f4
Added Method to Allow Pipelined Asset Loading (#10565)
# Objective

- Fixes #10518

## Solution

I've added a method to `LoadContext`, `load_direct_with_reader`, which
mirrors the behaviour of `load_direct` with a single key difference: it
is provided with the `Reader` by the caller, rather than getting it from
the contained `AssetServer`. This allows for an `AssetLoader` to process
its `Reader` stream, and then directly hand the results off to the
`LoadContext` to handle further loading. The outer `AssetLoader` can
control how the `Reader` is interpreted by providing a relevant
`AssetPath`.

For example, a Gzip decompression loader could process the asset
`images/my_image.png.gz` by decompressing the bytes, then handing the
decompressed result to the `LoadContext` with the new path
`images/my_image.png.gz/my_image.png`. This intuitively reflects the
nature of contained assets, whilst avoiding unintended behaviour, since
the generated path cannot be a real file path (a file and folder of the
same name cannot coexist in most file-systems).

```rust
// Imports and error type added for completeness; they assume the
// Bevy 0.12-era asset API and a `thiserror` dependency.
use bevy::{
    asset::{
        io::{Reader, VecReader},
        AssetLoader, AsyncReadExt, ErasedLoadedAsset, LoadContext, LoadDirectError,
    },
    prelude::*,
    reflect::TypePath,
    utils::BoxedFuture,
};
use flate2::read::GzDecoder;
use std::io::Read;
use thiserror::Error;

#[derive(Debug, Error)]
pub enum GzAssetLoaderError {
    #[error("Could not load asset: {0}")]
    Io(#[from] std::io::Error),
    #[error("Could not determine file path of asset")]
    IndeterminateFilePath,
    #[error("Could not load contained asset: {0}")]
    LoadDirectError(#[from] LoadDirectError),
}

#[derive(Asset, TypePath)]
pub struct GzAsset {
    pub uncompressed: ErasedLoadedAsset,
}

#[derive(Default)]
pub struct GzAssetLoader;

impl AssetLoader for GzAssetLoader {
    type Asset = GzAsset;
    type Settings = ();
    type Error = GzAssetLoaderError;
    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        _settings: &'a (),
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Self::Asset, Self::Error>> {
        Box::pin(async move {
            let compressed_path = load_context.path();
            let file_name = compressed_path
                .file_name()
                .ok_or(GzAssetLoaderError::IndeterminateFilePath)?
                .to_string_lossy();
            let uncompressed_file_name = file_name
                .strip_suffix(".gz")
                .ok_or(GzAssetLoaderError::IndeterminateFilePath)?;
            let contained_path = compressed_path.join(uncompressed_file_name);

            let mut bytes_compressed = Vec::new();

            reader.read_to_end(&mut bytes_compressed).await?;

            let mut decoder = GzDecoder::new(bytes_compressed.as_slice());

            let mut bytes_uncompressed = Vec::new();

            decoder.read_to_end(&mut bytes_uncompressed)?;

            // Now that we have decompressed the asset, let's pass it back to the
            // context to continue loading

            let mut reader = VecReader::new(bytes_uncompressed);

            let uncompressed = load_context
                .load_direct_with_reader(&mut reader, contained_path)
                .await?;

            Ok(GzAsset { uncompressed })
        })
    }

    fn extensions(&self) -> &[&str] {
        &["gz"]
    }
}
```

Because this example is so instructive, I've included an
`asset_decompression` example which implements this exact behaviour:

```rust
use bevy::prelude::*;
use std::marker::PhantomData;

// `Compressed<T>` tracks a handle to the compressed container asset;
// shown here so the snippet is self-contained.
#[derive(Component, Default)]
struct Compressed<T> {
    compressed: Handle<GzAsset>,
    _phantom: PhantomData<T>,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_asset::<GzAsset>()
        .init_asset_loader::<GzAssetLoader>()
        .add_systems(Startup, setup)
        .add_systems(Update, decompress::<Image>)
        .run();
}

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn(Camera2dBundle::default());

    commands.spawn((
        Compressed::<Image> {
            compressed: asset_server.load("data/compressed_image.png.gz"),
            ..default()
        },
        Sprite::default(),
        TransformBundle::default(),
        VisibilityBundle::default(),
    ));
}

fn decompress<A: Asset>(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut compressed_assets: ResMut<Assets<GzAsset>>,
    query: Query<(Entity, &Compressed<A>)>,
) {
    for (entity, Compressed { compressed, .. }) in query.iter() {
        let Some(GzAsset { uncompressed }) = compressed_assets.remove(compressed) else {
            continue;
        };

        let uncompressed = uncompressed.take::<A>().unwrap();

        commands
            .entity(entity)
            .remove::<Compressed<A>>()
            .insert(asset_server.add(uncompressed));
    }
}
```

A key limitation to this design is how to type the internally loaded
asset, since the example `GzAssetLoader` is unaware of the internal
asset type `A`. As such, in this example I store the contained asset as
an `ErasedLoadedAsset`, and leave it up to the consumer of the `GzAsset`
to handle typing the final result, which is the purpose of the
`decompress` system. This limitation can be worked around by providing
type information to the `GzAssetLoader`, such as `GzAssetLoader<Image,
ImageAssetLoader>`, but this would require registering the asset loader
for every possible decompression target.

Aside from this limitation, nested asset containerisation works as an
end user would expect; if the user registers a `TarAssetLoader`, and a
`GzAssetLoader`, then they can load assets with compound
containerisation, such as `images.tar.gz`.

---

## Changelog

- Added `LoadContext::load_direct_with_reader`
- Added `asset_decompression` example

## Notes

- While I believe my implementation of a Gzip asset loader is
reasonable, I haven't included it as a public feature of `bevy_asset` to
keep the scope of this PR as focussed as possible.
- I have included `flate2` as a `dev-dependency` for the example; it is
not included in the main dependency graph.
2023-11-16 17:47:31 +00:00
IceSentry
b1aa74d511
Add shader_material_2d example (#10542)
# Objective

- 2d materials have subtle differences from 3d materials that aren't
obvious to beginners

## Solution

- Add an example that shows how to make a 2d material
2023-11-14 02:18:25 +00:00
Aevyrie
1d8d78ef0e
Update color and naming for consistency (#10367)
The `ClearColor` PR was merged before I was quite finished. This fixes a
few errors, and addresses Cart's feedback about the pixel perfect
example by updating the sprite colors to match the existing bevy bird
branding colors.


![image](https://github.com/bevyengine/bevy/assets/2632925/33722c45-ed66-4d3a-af11-f4197611a13c)
2023-11-04 02:09:23 +00:00
Markus Ort
fd232ad360
Add UI Materials (#9506)
# Objective

- Add Ui Materials so that UI can render more complex and animated
widgets.
- Fixes #5607 

## Solution
- Create a UiMaterial trait for specifying a Shader Asset and Bind Group
Layout/Data.
- Create a pipeline for rendering these Materials inside the Ui
layout/tree.
- Create a MaterialNodeBundle for simple spawning.
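
A sketch of the intended usage, assuming `UiMaterial` mirrors the
existing `Material`/`Material2d` pattern (import paths and the plugin
name are assumptions):

```rust
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::ui::UiMaterial;

#[derive(AsBindGroup, Asset, TypePath, Debug, Clone)]
struct MyUiMaterial {
    #[uniform(0)]
    color: Vec4,
}

impl UiMaterial for MyUiMaterial {
    fn fragment_shader() -> ShaderRef {
        "shaders/my_ui_material.wgsl".into()
    }
}

fn setup(mut commands: Commands, mut materials: ResMut<Assets<MyUiMaterial>>) {
    // MaterialNodeBundle places the material in the UI layout/tree.
    commands.spawn(MaterialNodeBundle {
        style: Style {
            width: Val::Px(200.0),
            height: Val::Px(100.0),
            ..default()
        },
        material: materials.add(MyUiMaterial { color: Vec4::ONE }),
        ..default()
    });
}

// App setup would register the material's plugin, e.g.:
// app.add_plugins(UiMaterialPlugin::<MyUiMaterial>::default());
```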

## Changelog

- Created a `UiMaterial` trait for specifying a Shader asset and Bind
Group.
- Created a `UiMaterialPipeline` for rendering said Materials.
- Added Example [`ui_material`
](https://github.com/MarkusTheOrt/bevy/blob/ui_material/examples/ui/ui_material.rs)
for example usage.
- Created
[`UiVertexOutput`](https://github.com/MarkusTheOrt/bevy/blob/ui_material/crates/bevy_ui/src/render/ui_vertex_output.wgsl)
export as VertexData for shaders.
- Created
[`material_ui`](https://github.com/MarkusTheOrt/bevy/blob/ui_material/crates/bevy_ui/src/render/ui_material.wgsl)
shader as default for both Vertex and Fragment shaders.

---------

Co-authored-by: ickshonpe <david.curthoys@googlemail.com>
Co-authored-by: François <mockersf@gmail.com>
2023-11-03 22:33:01 +00:00
Aevyrie
1918608b02
Update default ClearColor to better match Bevy's branding (#10339)
# Objective

- Changes the default clear color to match the code block color on
Bevy's website.

## Solution

- Changed the clear color, updated text in examples to ensure adequate
contrast. Inconsistent usage of white text color set to use the default
color instead, which is already white.
- Additionally, updated the `3d_scene` example to make it look a bit
better, and use bevy's branding colors.


![image](https://github.com/bevyengine/bevy/assets/2632925/540a22c0-826c-4c33-89aa-34905e3e313a)
2023-11-03 12:57:38 +00:00
François
6f8848a6c2
double sided normals: fix apply_normal_mapping calls (#10330)
# Objective

- After #10326, examples `array_texture`, `ssao` and `shader_prepass`
don't render correctly
```
error: failed to build a valid final module: Entry point fragment at Fragment is invalid
   ┌─ crates/bevy_pbr/src/render/pbr_prepass.wgsl:26:22
   │
26 │           let normal = bevy_pbr::pbr_functions::apply_normal_mapping(
   │ ╭──────────────────────^
27 │ │             bevy_pbr::pbr_bindings::material.flags,
28 │ │             world_normal,
29 │ │
   · │
36 │ │
37 │ │             bevy_pbr::mesh_view_bindings::view.mip_bias,
   │ ╰───────────────────────────────────────────────────────────────────────────────────────^ invalid function call
   │
   = Call to [9] is invalid
   = Requires 6 arguments, but 4 are provided

```

## Solution

- fix `apply_normal_mapping` calls
2023-11-01 16:40:25 +00:00
robtfm
61bad4eb57
update shader imports (#10180)
# Objective

- bump naga_oil to 0.10
- update shader imports to use rusty syntax

## Migration Guide

naga_oil 0.10 reworks the import mechanism to support more syntax to
make it more rusty, and test for item use before importing to determine
which imports are modules and which are items, which allows:

- use rust-style imports
```
#import bevy_pbr::{
    pbr_functions::{alpha_discard as discard, apply_pbr_lighting}, 
    mesh_bindings,
}
```

- import partial paths:
```
#import part::of::path
...
path::remainder::function();
```
which will call `part::of::path::remainder::function`

- use fully qualified paths without importing:
```
// #import bevy_pbr::pbr_functions
bevy_pbr::pbr_functions::pbr()
```
- use imported items without qualifying
```
#import bevy_pbr::pbr_functions::pbr
// for backwards compatibility the old style is still supported:
// #import bevy_pbr::pbr_functions pbr
...
pbr()
```

- allows most imported items to end with `_` and numbers (naga_oil#30).
Still doesn't allow struct members to end with `_` or numbers, but it's
progress.

- the vast majority of existing shader code will work without changes,
but will emit "deprecated" warnings for old-style imports. These can be
suppressed with the `allow-deprecated` feature.

- partly breaks overrides (as far as I'm aware nobody uses these yet):
now overrides will only be applied if the overriding module is added as
an additional import in the arguments to `Composer::make_naga_module` or
`Composer::add_composable_module`. This is necessary to support
determining whether imports are modules or items.
2023-10-21 11:51:58 +00:00
François
f6003c3553
array_texture example: use new name of pbr function (#10168)
# Objective

- After #7820, the `array_texture` example doesn't display anything

## Solution

- Use the new name of the function in the shader
2023-10-18 01:19:28 +00:00
robtfm
c99351f7c2
allow extensions to StandardMaterial (#7820)
# Objective

allow extending `Material`s (including the built-in `StandardMaterial`)
with custom vertex/fragment shaders and additional data, to easily get
pbr lighting with custom modifications, or otherwise extend a base
material.

# Solution

- added `ExtendedMaterial<B: Material, E: MaterialExtension>` which
contains a base material and a user-defined extension.
- added example `extended_material` showing how to use it
- modified AsBindGroup to have "unprepared" functions that return raw
resources / layout entries so that the extended material can combine
them
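
A sketch of the usage, modeled on the `extended_material` example this
PR adds (the shader path and extension fields are illustrative):

```rust
use bevy::pbr::{ExtendedMaterial, MaterialExtension};
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};

// The extension adds its own data on top of the base material's.
#[derive(Asset, AsBindGroup, TypePath, Debug, Clone)]
struct MyExtension {
    // Use binding slots above those reserved by StandardMaterial.
    #[uniform(100)]
    quantize_steps: u32,
}

impl MaterialExtension for MyExtension {
    fn fragment_shader() -> ShaderRef {
        "shaders/extended_material.wgsl".into()
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Register the combined material like any other Material.
        .add_plugins(MaterialPlugin::<
            ExtendedMaterial<StandardMaterial, MyExtension>,
        >::default())
        .run();
}
```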

Note: this doesn't currently work with array resources, as I can't
figure out how to make `OwnedBindingResource::get_binding()` work; wgpu
requires a `&'a [&'a TextureView]` and I have a `Vec<TextureView>`.

# Migration Guide

Manual implementations of `AsBindGroup` will need to be adjusted; the
changes are pretty straightforward and can be seen in the diff for e.g.
the `texture_binding_array` example.

---------

Co-authored-by: Robert Swain <robert.swain@gmail.com>
2023-10-17 21:28:08 +00:00
robtfm
979c4094d4
pbr shader cleanup (#10105)
# Objective

Clean up some pbr shader code. Improve shader stage io consistency and
make pbr.wgsl (probably many people's first foray into bevy shader code)
a little more human-readable. Also fix a couple of small issues with
deferred rendering.

## Solution

mesh_vertex_output: 
- rename to forward_io (to align with prepass_io)
- rename `MeshVertexOutput` to `VertexOutput` (to align with prepass_io)
- move `Vertex` from mesh.wgsl into here (to align with prepass_io)

prepass_io: 
- remove `FragmentInput`, use `VertexOutput` directly (to align with
forward_io)
- rename `VertexOutput::clip_position` to `position` (to align with
forward_io)

pbr.wgsl:
- restructure so we don't need `#ifdefs` on the actual entrypoint, use
VertexOutput and FragmentOutput in all cases and use #ifdefs to import
the right struct definitions.
- rearrange to make the flow clearer
- move alpha_discard up from `pbr_functions::pbr` to avoid needing to
call it on some branches and not others
- add a bunch of comments

deferred_lighting:
- move ssao into the `!unlit` block to reflect forward behaviour
correctly
- fix compile error with deferred + premultiply_alpha

## Migration Guide

in custom material shaders:
- `pbr_functions::pbr` no longer calls
`pbr_functions::alpha_discard`. If you were using the `pbr` function in
a custom shader with alpha mask mode, you now also need to call
`alpha_discard` manually
- rename imports of `bevy_pbr::mesh_vertex_output` to
`bevy_pbr::forward_io`
- rename instances of `MeshVertexOutput` to `VertexOutput`

in custom material prepass shaders:
- rename instances of `VertexOutput::clip_position` to
`VertexOutput::position`
2023-10-13 19:12:40 +00:00
Griffin
a15d152635
Deferred Renderer (#9258)
# Objective

- Add a [Deferred
Renderer](https://en.wikipedia.org/wiki/Deferred_shading) to Bevy.
- This allows subsequent passes to access per pixel material information
before/during shading.
- Accessing this per pixel material information is needed for some
features, like GI. It also makes other features (ex. Decals) simpler to
implement and/or improves their capability. There are multiple
approaches to accomplishing this. The deferred shading approach works
well given the limitations of WebGPU and WebGL2.

Motivation: [I'm working on a GI solution for
Bevy](https://youtu.be/eH1AkL-mwhI)

# Solution
- The deferred renderer is implemented with a prepass and a deferred
lighting pass.
- The prepass renders opaque objects into the Gbuffer attachment
(`Rgba32Uint`). The PBR shader generates a `PbrInput` in mostly the same
way as the forward implementation and then [packs it into the
Gbuffer](ec1465559f/crates/bevy_pbr/src/render/pbr.wgsl (L168)).
- The deferred lighting pass unpacks the `PbrInput` and [feeds it into
the pbr()
function](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L65)),
then outputs the shaded color data.

- There is now a resource
[DefaultOpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L599))
that can be used to set the default render method for opaque materials
(see the sketch after this list).
If materials return `None` from
[opaque_render_method()](ec1465559f/crates/bevy_pbr/src/material.rs (L131))
the `DefaultOpaqueRendererMethod` will be used. Otherwise, custom
materials can also explicitly choose to only support Deferred or Forward
by returning the respective
[OpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L603))

- Deferred materials can be used seamlessly along with both opaque and
transparent forward rendered materials in the same scene. The [deferred
rendering
example](https://github.com/DGriffin91/bevy/blob/deferred/examples/3d/deferred_rendering.rs)
does this.

- The deferred renderer does not support MSAA. If any deferred materials
are used, MSAA must be disabled. Both TAA and FXAA are supported.

- Deferred rendering supports WebGL2/WebGPU. 
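
Putting this together, a minimal app that opts opaque materials into
the deferred path might look like this (a sketch; the
`DefaultOpaqueRendererMethod::deferred()` constructor and disabling
MSAA follow this description):

```rust
use bevy::pbr::DefaultOpaqueRendererMethod;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Route opaque materials through the deferred path by default;
        // individual materials can still opt out via
        // `opaque_render_method()`.
        .insert_resource(DefaultOpaqueRendererMethod::deferred())
        // The deferred renderer does not support MSAA.
        .insert_resource(Msaa::Off)
        .run();
}
```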

## Custom deferred materials
- Custom materials can support both deferred and forward at the same
time. The
[StandardMaterial](ec1465559f/crates/bevy_pbr/src/render/pbr.wgsl (L166))
does this. So does [this
example](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56).
- Custom deferred materials that require PBR lighting can create a
`PbrInput`, write it to the deferred GBuffer and let it be rendered by
the `PBRDeferredLightingPlugin`.
- Custom deferred materials that require custom lighting have two
options:
1. Use the base_color channel of the `PbrInput` combined with the
`STANDARD_MATERIAL_FLAGS_UNLIT_BIT` flag.
[Example.](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56)
(If the unlit bit is set, the base_color is stored as RGB9E5 for extra
precision)
2. A custom deferred lighting pass can be created, either overriding the
default or running in addition to it. A depth buffer is used to limit
rendering to only the required fragments for each deferred lighting
pass. Materials can set their respective depth id via the
[deferred_lighting_pass_id](b79182d2a3/crates/bevy_pbr/src/prepass/prepass_io.wgsl (L95))
attachment. The custom deferred lighting pass plugin can then set [its
corresponding
depth](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L37)).
Then with the lighting pass using
[CompareFunction::Equal](ec1465559f/crates/bevy_pbr/src/deferred/mod.rs (L335)),
only the fragments with a depth that equal the corresponding depth
written in the material will be rendered.

Custom deferred lighting plugins can also be created to render the
StandardMaterial. The default deferred lighting plugin can be bypassed
with `DefaultPlugins.set(PBRDeferredLightingPlugin { bypass: true })`

---------

Co-authored-by: nickrart <nickolas.g.russell@gmail.com>
2023-10-12 22:10:38 +00:00