# Objective
- Bevy now supports an opaque phase for mesh2d, but it's very common for
2d textures to have a transparent alpha channel.
## Solution
- Add an alpha mask phase identical to the one in 3d. It will do the
alpha masking in the shader before drawing the mesh (see the sketch
below).
- Uses the `BinnedRenderPhase`.
- Since it's an opaque draw, it also correctly writes to depth.
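A minimal sketch of what opting in could look like, assuming the 2d API
mirrors 3d's `AlphaMode::Mask` (the import path and the `0.5` cutoff are
illustrative):
```rust
use bevy::prelude::*;
use bevy::sprite::AlphaMode2d; // path assumed

// Fragments with alpha below the cutoff are discarded in the shader,
// so the mesh can still be drawn as an opaque item that writes depth.
fn make_masked_material(mut materials: ResMut<Assets<ColorMaterial>>) {
    let _material = materials.add(ColorMaterial {
        alpha_mode: AlphaMode2d::Mask(0.5),
        ..Default::default()
    });
}
```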
## Testing
- Tested the `mesh2d_alpha_mode` example and the bevymark example with
alpha mask mode enabled
---
## Showcase
![image](https://github.com/user-attachments/assets/9e5e4561-d0a7-4aa3-b049-d4b1247d5ed4)
The white logo on the right is rendered with alpha mask enabled.
Running the bevymark example, I get 65 fps for 120k mesh2d entities, all
using alpha mask.
## Notes
This is one more step in the mesh2d improvements tracked in
https://github.com/bevyengine/bevy/issues/13265
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
By default, Bevy's bloom effect shows square artifacts on small bright
particles due to a low max mip resolution. This PR makes this
configurable via `BloomSettings`, so users can customize these parameters
instead of having them fixed in private module constants.
## Solution
Expose `max_mip_dimension` and `uv_offset` in `BloomSettings`.
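A sketch of what tuning these could look like (field types and defaults
are assumed; the values are illustrative, not recommendations):
```rust
use bevy::core_pipeline::bloom::BloomSettings;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera2dBundle {
            camera: Camera { hdr: true, ..default() },
            ..default()
        },
        BloomSettings {
            // Raise the max mip resolution to reduce square artifacts
            // on small bright particles.
            max_mip_dimension: 1024,
            ..default()
        },
    ));
}
```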
## Testing
I tested these changes by running the Bloom 2D / 3D examples.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Based on top of #12982 and #13069
# Objective
- `Opaque2d` was implemented with `SortedRenderPhase`, but
`BinnedRenderPhase` should be much faster
## Solution
- Implement `BinnedRenderPhase` for `Opaque2d`
## Notes
While testing this PR, before the change I had ~14 fps in bevymark with
100k entities. After this change I get ~71 fps, compared to ~63 fps when
using sprites. This means that after this PR, mesh2d with opaque meshes
will be faster than the sprite path. This is not a 1:1 comparison, since
sprites do alpha blending.
# Objective
Closes #14526
## Solution
The history texture was being created incorrectly with the viewport size
rather than the target size. When the viewport was smaller than the
target, the render attachments would differ in size, which causes a wgpu
validation error.
## Testing
Example in linked issue works.
This PR is based on top of #12982
# Objective
- Mesh2d currently only has an alpha blended phase. Most sprites don't
need transparency though.
- For some 2d games it can be useful to have a 2d depth buffer
## Solution
- Add an opaque phase to render `Mesh2d`s that don't need transparency
- This phase currently uses the `SortedRenderPhase` to make it easier to
implement based on the already existing transparent phase. A follow up
PR will switch this to `BinnedRenderPhase`.
- Add a 2d depth buffer
- Use that depth buffer in the transparent phase to make sure that
sprites and transparent mesh2d are displayed correctly
## Testing
I added the mesh2d_transforms example that layers many opaque and
transparent mesh2d to make sure they all get displayed correctly. I also
confirmed it works with sprites by modifying that example locally.
---
## Changelog
- Added `AlphaMode2d`
- Added `Opaque2d` render phase
- `Camera2d` now has a `ViewDepthTexture` component
## Migration Guide
- `ColorMaterial` now contains `AlphaMode2d`. To keep the previous
behaviour, use `AlphaMode2d::Blend`. If you know your sprite is opaque,
use `AlphaMode2d::Opaque` (see the sketch below)
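A migration sketch, assuming `AlphaMode2d` exposes the `Blend` and
`Opaque` variants described above (import path assumed):
```rust
use bevy::prelude::*;
use bevy::sprite::AlphaMode2d; // path assumed

fn make_materials(mut materials: ResMut<Assets<ColorMaterial>>) {
    // Previous behaviour: alpha blending.
    let _blended = materials.add(ColorMaterial {
        alpha_mode: AlphaMode2d::Blend,
        ..Default::default()
    });
    // Known-opaque sprites can use the new opaque phase instead.
    let _opaque = materials.add(ColorMaterial {
        alpha_mode: AlphaMode2d::Opaque,
        ..Default::default()
    });
}
```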
## Follow up PRs
- See tracking issue: #13265
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christopher Biscardi <chris@christopherbiscardi.com>
# Objective
I can't mutate the dof settings via tools like `bevy_inspector_egui`
## Solution
Add `Reflect` for `DepthOfFieldSettings` and `DepthOfFieldMode`
# Objective
Previously, our cubic spline constructors would produce
`CubicCurve`/`RationalCurve` output with no data when they themselves
didn't hold enough control points to produce a well-formed curve.
Attempting to sample the resulting empty "curves" (e.g. by calling
`CubicCurve::position`) would crash the program (😓).
The objectives of this PR are:
1. Ensure that the curve output of `bevy_math`'s spline constructions
are never invalid as data.
2. Provide a type-level guarantee that `CubicCurve` and `RationalCurve`
actually function as curves.
## Solution
This has a few pieces. Firstly, the curve generator traits
`CubicGenerator`, `CyclicCubicGenerator`, and `RationalGenerator` are
now fallible — they have associated error types, and the
curve-generation functions are allowed to fail:
```rust
/// Implement this on cubic splines that can generate a cubic curve from their spline parameters.
pub trait CubicGenerator<P: VectorSpace> {
    /// An error type indicating why construction might fail.
    type Error;

    /// Build a [`CubicCurve`] by computing the interpolation coefficients for each curve segment.
    fn to_curve(&self) -> Result<CubicCurve<P>, Self::Error>;
}
```
All existing spline constructions use this, together with errors that
indicate when they didn't have the right control data; they provide
curves which have at least one segment whenever they return an `Ok`
variant.
Next, `CubicCurve` and `RationalCurve` have been blessed with a
guarantee that their internal array of segments (`segments`) is never
empty. In particular, this field is no longer public, so that invalid
curves cannot be built using struct instantiation syntax. To compensate
for this shortfall for users (in particular library authors who might
want to implement their own generators), there is a new method
`from_segments` on these for constructing a curve from a list of
segments, failing if the list is empty:
```rust
/// Create a new curve from a collection of segments. If the collection of segments is empty,
/// a curve cannot be built and `None` will be returned instead.
pub fn from_segments(segments: impl Into<Vec<CubicSegment<P>>>) -> Option<Self> { //... }
```
All existing methods on `CubicCurve` and `RationalCurve` maintain the
invariant, so the direct construction of invalid values by users is
impossible.
## Testing
Run unit tests from `bevy_math::cubic_splines`. Additionally, run the
`cubic_splines` example and try to get it to crash using small numbers
of control points: it uses the fallible constructors directly, so if
invalid data is ever constructed, it is basically guaranteed to crash.
---
## Migration Guide
The `to_curve` method on Bevy's cubic splines is now fallible (returning
a `Result`), meaning that any existing calls will need to be updated by
handling the possibility of an error variant.
Similarly, any custom implementation of `CubicGenerator` or
`RationalGenerator` will need to be amended to include an `Error` type
and be made fallible itself.
Finally, the fields of `CubicCurve` and `RationalCurve` are now private,
so any direct constructions of these structs from segments will need to
be replaced with the new `CubicCurve::from_segments` and
`RationalCurve::from_segments` methods.
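A migration sketch (the spline type and control points are illustrative;
the concrete error type is assumed to implement `Display`):
```rust
use bevy::math::cubic_splines::{CubicCardinalSpline, CubicGenerator};
use bevy::math::vec2;

fn build_curve() {
    let points = [
        vec2(-1.0, -20.0),
        vec2(3.0, 2.0),
        vec2(5.0, 3.0),
        vec2(9.0, 8.0),
    ];
    // `to_curve` now returns a Result instead of a possibly-empty curve.
    match CubicCardinalSpline::new(0.3, points).to_curve() {
        Ok(curve) => {
            // Sampling is safe: the curve has at least one segment.
            let _position = curve.position(0.5);
        }
        Err(err) => eprintln!("not enough control points: {err}"),
    }
}
```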
---
## Design
The main thing to justify here is the choice for the curve internals to
remain the same. After all, if they were able to cause crashes in the
first place, it's worth wondering why safeguards weren't put in place on
the types themselves to prevent that.
My view on this is that the problem was really that the internals of
these methods implicitly relied on the assumption that the value they
were operating on was *actually a curve*, when this wasn't actually
guaranteed. Now, it's possible to make a bunch of small changes inside
the curve struct methods to account for that, but I think that's worse
than just guaranteeing that the data is valid upstream — sampling is
about as hot a code path as we're going to get in this area, and hitting
an additional branch every time it happens just to check that the struct
contains valid data is probably a waste of resources.
Another way of phrasing this is that even if we're only interested in
solving the crashes, the curve's validity needs to be checked at some
point, and it's almost certainly better to do this once at the point of
construction than every time the curve is sampled.
In cases where the control data is supplied dynamically, users would
already have to deal with empty curve outputs basically not working.
Anecdotally, I ran into this while writing the `cubic_splines` example,
and I think the diff illustrates the improvement pretty nicely — the
code no longer has to anticipate whether the output will be good or not;
it just has to handle the `Result`.
The cost of all this, of course, is that we have to guarantee that the
new invariant is actually maintained whenever we extend the API.
However, for the most part, I don't expect users to want to do much
surgery on the internals of their curves anyway.
# Objective
- Fix issue #2611
## Solution
- Add `--generate-link-to-definition` to all the `rustdoc-args` arrays
in the `Cargo.toml`s (for docs.rs)
- Add `--generate-link-to-definition` to the `RUSTDOCFLAGS` environment
variable in the docs workflow (for dev-docs.bevyengine.org)
- Document all the workspace crates in the docs workflow (needed because
otherwise only the source code of the `bevy` package will be included,
making the argument useless)
- I think this also fixes #3662, since it fixes the bug on
dev-docs.bevyengine.org, while on docs.rs it has been fixed for a while
on their side.
---
## Changelog
- The source code viewer on docs.rs now includes links to the
definitions.
# Objective
- Fix a confusing panic when the viewport width is non-zero and the
height is 0: `prepare_bloom_textures` tries to create a `4294967295x1`
texture.
## Solution
- Avoid dividing by zero
- Apps still crash after this, but now on a more reasonable error about
the zero-size viewport
## Testing
- I isolated and tested the math. A height of 0 sets `mip_height_ratio`
to `inf`, causing the width to explode if it isn't also 0
# Objective
- It's possible to have errors in a draw command, but these errors are
ignored
## Solution
- Return a result with the error
## Changelog
Renamed `RenderCommandResult::Failure` to `RenderCommandResult::Skip`
Added a `reason` string parameter to `RenderCommandResult::Failure`
## Migration Guide
If you were using `RenderCommandResult::Failure` to just ignore an error
and retry later, use `RenderCommandResult::Skip` instead.
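A sketch of the distinction inside a custom render command (the
surrounding `RenderCommand` impl is omitted and the reason string is
illustrative):
```rust
use bevy::render::render_phase::RenderCommandResult;

fn resolve(buffers_ready: bool, recoverable: bool) -> RenderCommandResult {
    if buffers_ready {
        RenderCommandResult::Success
    } else if recoverable {
        // Not an error: the data simply isn't ready yet, so skip this item.
        RenderCommandResult::Skip
    } else {
        // A real error now carries a reason that can be surfaced to the user.
        RenderCommandResult::Failure("mesh buffers missing")
    }
}
```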
This wasn't intentional, but this PR should also help with
https://github.com/bevyengine/bevy/issues/12660 since we can turn a few
unwraps into error messages now.
---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Switches `Msaa` from being a globally configured resource to a
per-camera view component.
Closes #7194
# Objective
Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.
## Solution
Make `Msaa` a component that is required on all camera bundles.
## Testing
Ran a variety of examples to ensure that nothing broke.
TODO:
- [ ] Make sure android still works per previous comment in
`extract_windows`.
---
## Migration Guide
`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired.
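A migration sketch:
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Before: App::new().insert_resource(Msaa::Off)
    // After: each camera carries its own setting.
    commands.spawn((
        Camera3dBundle::default(),
        Msaa::Off,
    ));
}
```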
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Replace `CAS` with `Cas` in `CASPlugin`
- Closes #14341
## Solution
- Simple replace
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
When the user renders multiple cameras to the same output texture, it
can sometimes be confusing what `ClearColorConfig` is necessary for each
camera to avoid overwriting the previous camera's output. This is
particularly true in cases where the user uses mixed HDR cameras, which
means that their scene is being rendered to different internal textures.
## Solution
When a view has a configured viewport, set the GPU scissor in the
upscaling node so we don't overwrite areas that were written to by other
cameras.
## Testing
Ran the `split_screen` example.
# Objective
- Actually use the value assigned to `d_xz`, like in [the original SMAA
implementation](https://github.com/iryoku/smaa/blob/master/SMAA.hlsl#L960).
That this was not already the case was likely a mistake made when
converting from HLSL to WGSL.
## Solution
- Use `d_xz.x` and `d_xz.y` instead of `d.x` and `d.z`
## Testing
- Quickly tested on Windows 11, `x86_64-pc-windows-gnu` `1.79.0` with
the latest NVIDIA drivers. App runs with SMAA enabled and everything
seems to work as intended
- I didn't observe any major visual difference between this and the
previous version, though this should be more correct as it matches the
original SMAA implementation
# Objective
- Fixes https://github.com/bevyengine/bevy/issues/14036
## Solution
- Add a view space transformation for the skybox
## Testing
- I have tested the newly added `transform` field using the `skybox`
example.
```diff
diff --git a/examples/3d/skybox.rs b/examples/3d/skybox.rs
index beaf5b268..d16cbe988 100644
--- a/examples/3d/skybox.rs
+++ b/examples/3d/skybox.rs
@@ -81,6 +81,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
Skybox {
image: skybox_handle.clone(),
brightness: 1000.0,
+ rotation: Quat::from_rotation_x(PI * -0.5),
},
));
```
<img width="1280" alt="image"
src="https://github.com/bevyengine/bevy/assets/6300263/1230a608-58ea-492d-a811-90c54c3b43ef">
## Migration Guide
- Since we have added a new field to the `Skybox` struct, users will
need to include `..Default::default()` or set some rotation value in
their initialization code.
This commit creates a new built-in postprocessing shader that's designed
to hold miscellaneous postprocessing effects, and starts it off with
chromatic aberration. Possible future effects include vignette, film
grain, and lens distortion.
[Chromatic aberration] is a common postprocessing effect that simulates
lenses that fail to focus all colors of light to a single point. It's
often used for impact effects and/or horror games. This patch uses the
technique from *Inside* ([Gjøl & Svendsen 2016]), which allows the
developer to customize the particular color pattern to achieve different
effects. Unity HDRP uses the same technique, while Unreal has a
hard-wired fixed color pattern.
A new example, `post_processing`, has been added, in order to
demonstrate the technique. The existing `post_processing` shader has
been renamed to `custom_post_processing`, for clarity.
[Chromatic aberration]:
https://en.wikipedia.org/wiki/Chromatic_aberration
[Gjøl & Svendsen 2016]:
https://github.com/playdeadgames/publications/blob/master/INSIDE/rendering_inside_gdc2016.pdf
![Screenshot 2024-06-04
180304](https://github.com/bevyengine/bevy/assets/157897/3631c64f-a615-44fe-91ca-7f04df0a54b2)
![Screenshot 2024-06-04
180743](https://github.com/bevyengine/bevy/assets/157897/ee055cbf-4314-49c5-8bfa-8d8a17bd52bb)
## Changelog
### Added
* Chromatic aberration is now available as a built-in postprocessing
effect. To use it, add `ChromaticAberration` to your camera.
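A sketch of enabling it (module path and defaults assumed):
```rust
use bevy::core_pipeline::post_process::ChromaticAberration;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Defaults are assumed to give a subtle effect; the color LUT and
        // intensity can be customized for impact/horror looks.
        ChromaticAberration::default(),
    ));
}
```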
# Objective
- Bevy currently has lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit
links to docs.rs that are broken)
## Solution
- Make `cargo r -p ci -- doc-check` check fail on warnings (could also
be changed to just some specific lints)
- Manually fix all the warnings (note that in some cases it was unclear
to me what the fix should have been; I'll try to highlight them in a
self-review)
Bump version after release
This PR has been auto-generated
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
- When no wgpu backend is selected, there should be a clear explanation.
- Fix a regression in 0.14 when not using default features. I hit this
compile failure when trying to build bevy_framepace for 0.14.0-rc.4
```
error[E0432]: unresolved import `crate::core_3d::DEPTH_TEXTURE_SAMPLING_SUPPORTED`
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/dof/mod.rs:59:19
|
59 | Camera3d, DEPTH_TEXTURE_SAMPLING_SUPPORTED,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `DEPTH_TEXTURE_SAMPLING_SUPPORTED` in `core_3d`
|
note: found an item that was configured out
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/core_3d/mod.rs:53:11
|
53 | pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = false;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: found an item that was configured out
--> /Users/aevyrie/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_core_pipeline-0.14.0-rc.4/src/core_3d/mod.rs:63:11
|
63 | pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = true;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
## Solution
- Ensure that `DEPTH_TEXTURE_SAMPLING_SUPPORTED` is either `true` or
`false`; it shouldn't be completely missing.
## Testing
- Building on WASM without default features, which now seemingly no
longer includes webgl, will panic on startup with a message saying that
no wgpu backend was selected. This is much more helpful than the compile
time failure:
```
No wgpu backend feature that is implemented for the target platform was enabled
```
- I can see an argument for making this a compile time failure, however
the current failure mode is very confusing for novice users, and
provides no clues for how to fix it. If we want this to fail at compile
time, we should do it in a way that fails with a helpful message,
similar to what this PR achieves.
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
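A sketch of how a plugin might enqueue such an item (bin-key
construction is elided; see the new `custom_phase_item` example for a
complete version):
```rust
use bevy::core_pipeline::core_3d::{Opaque3d, Opaque3dBinKey};
use bevy::prelude::*;
use bevy::render::render_phase::{BinnedRenderPhase, BinnedRenderPhaseType};

fn enqueue_non_mesh_item(
    phase: &mut BinnedRenderPhase<Opaque3d>,
    key: Opaque3dBinKey,
    entity: Entity,
) {
    // NonMesh items skip the mesh-specific indirect-draw bookkeeping.
    phase.add(key, entity, BinnedRenderPhaseType::NonMesh);
}
```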
# Objective
- Fixes #13728
## Solution
- Add a new feature, `smaa_luts`. If enabled, it also enables `ktx2` and
`zstd`. If not, the SMAA lookup textures aren't loaded from files;
placeholders are used instead.
- Add all the needed resources in the same places that the systems using
them are added.
* Fixes https://github.com/bevyengine/bevy/issues/13813
* Fixes https://github.com/bevyengine/bevy/issues/13810
Tested a combined scene with both regular meshes and meshlet meshes
with:
* Regular forward setup
* Forward + normal/motion vector prepasses
* Deferred (with depth prepass since that's required)
* Deferred + depth/normal/motion vector prepasses
Still broken:
* Using meshlet meshes rendering in deferred and regular meshes
rendering in forward + depth/normal prepass. I don't know how to fix
this at the moment, so for now I've just added instructions not to mix
them.
Changes:
- Track whether an output texture has been written to yet and only clear
it on the first write.
- Use `ClearColorConfig` on `CameraOutputMode` instead of a raw
`LoadOp`.
- Track whether an output texture has been seen when specializing the
upscaling pipeline and use alpha blending for extra cameras rendering to
that texture that do not specify an explicit blend mode.
Fixes #6754
## Testing
Tested against provided test case in issue:
![image](https://github.com/bevyengine/bevy/assets/10366310/d066f069-87fb-4249-a4d9-b6cb1751971b)
---
## Changelog
- Allow cameras rendering to the same output texture with mixed hdr to
work correctly.
## Migration Guide
- Change `CameraOutputMode` to use `ClearColorConfig` instead of
`LoadOp`.
# Objective
Skip the unnecessary blit when tonemapping is set to none.
## Testing
Only tested locally on our app.
## Changelog
Tonemapping no longer executes when it is set to none.
Co-authored-by: Lukas Chodosevicius <lukaschodosevicius@Lukass-MacBook-Pro.local>
This commit implements a large subset of [*subpixel morphological
antialiasing*], better known as SMAA. SMAA is a 2011 antialiasing
technique that detects jaggies in an aliased image and smooths them out.
Despite its age, it's been a continual staple of games for over a
decade. Four quality presets are available: *low*, *medium*, *high*, and
*ultra*. I set the default to *high*, on account of modern GPUs being
significantly faster than they were in 2011.
Like the already-implemented FXAA, SMAA works on an unaliased image.
Unlike FXAA, it requires three passes: (1) edge detection; (2) blending
weight calculation; (3) neighborhood blending. Each of the first two
passes writes an intermediate texture for use by the next pass. The
first pass also writes to a stencil buffer in order to dramatically
reduce the number of pixels that the second pass has to examine. Also
unlike FXAA, two built-in lookup textures are required; I bundle them
into the library in compressed KTX2 format.
The [reference implementation of SMAA] is in HLSL, with abundant use of
preprocessor macros to achieve GLSL compatibility. Unfortunately, the
reference implementation predates WGSL by over a decade, so I had to
translate the HLSL to WGSL manually. As much as was reasonably possible
without sacrificing readability, I tried to translate line by line,
preserving comments, both to aid reviewing and to allow patches to the
HLSL to more easily apply to the WGSL. Most of SMAA's features are
supported, but in the interests of making this patch somewhat less huge,
I skipped a few of the more exotic ones:
* The temporal variant is currently unsupported. This is and has been
used in shipping games, so supporting temporal SMAA would be useful
follow-up work. It would, however, require some significant work on TAA
to ensure compatibility, so I opted to skip it in this patch.
* Depth- and chroma-based edge detection are unimplemented; only luma
is. Depth is lower-quality, but faster; chroma is higher-quality, but
slower. Luma is the suggested default edge detection algorithm. (Note
that depth-based edge detection wouldn't work on WebGL 2 anyway, because
of the Naga bug whereby depth sampling is miscompiled in GLSL. This is
the same bug that prevents depth of field from working on that
platform.)
* Predicated thresholding is currently unsupported.
* My implementation is incompatible with SSAA and MSAA, unlike the
original; MSAA must be turned off to use SMAA in Bevy. I believe this
feature was rarely used in practice.
The `anti_aliasing` example has been updated to allow experimentation
with and testing of the different SMAA quality presets. Along the way, I
refactored the example's help text rendering code a bit to eliminate
code repetition.
SMAA is fully supported on WebGL 2.
Fixes #9819.
[*subpixel morphological antialiasing*]: https://www.iryoku.com/smaa/
[reference implementation of SMAA]: https://github.com/iryoku/smaa
## Changelog
### Added
* Subpixel morphological antialiasing, or SMAA, is now available. To use
it, add the `SmaaSettings` component to your `Camera`.
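A sketch of enabling it (component and preset names per this patch):
```rust
use bevy::core_pipeline::smaa::{SmaaPreset, SmaaSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        SmaaSettings {
            preset: SmaaPreset::High, // low | medium | high | ultra
            ..default()
        },
    ));
}
```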
![Screenshot 2024-05-18
134311](https://github.com/bevyengine/bevy/assets/157897/ffbd611c-1b32-4491-b2e2-e410688852ee)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes #10909
- Fixes #8492
## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.
---
## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming,
making the space conversion more obvious.
## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
# Objective
- Add motion vector support to the skybox
- This fixes the last remaining "gap" to complete the motion blur
feature
## Solution
- Add a pipeline for the skybox to write motion vectors to the prepass
## Testing
- Used examples to test motion vectors using motion blur
https://github.com/bevyengine/bevy/assets/2632925/74c0778a-7e77-4e68-8111-05791e4bfdd2
---------
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
This is a revamped equivalent to #9902, though it shares none of the
code. It handles all special cases that I've tested correctly.
The overall technique consists of double-buffering the joint matrix and
morph weights buffers, as most of the previous attempts to solve this
problem did. The process is generally straightforward. Note that, to
avoid regressing the ability of mesh extraction, skin extraction, and
morph target extraction to run in parallel, I had to add a new system to
rendering, `set_mesh_motion_vector_flags`. The comment there explains
the details; it generally runs very quickly.
I've tested this with modified versions of the `animated_fox`,
`morph_targets`, and `many_foxes` examples that add TAA, and the patch
works. To avoid bloating those examples, I didn't add switches for TAA
to them.
Addresses points (1) and (2) of #8423.
## Changelog
### Fixed
* Motion vectors, and therefore TAA, are now supported for meshes with
skins and/or morph targets.
# Objective
- Motion blur does not currently work with skyboxes or anything else
that does not write to depth.
## Solution
- When computing blur, include fragments with no depth, as long as they
are onscreen.
## Testing
- Tested with the examples - the motion_blur example uncovered a bug
with this fix, where offscreen pixels were now being sampled and
causing artifacts.
- Attached example showing the skybox being sampled in the blur (note
the feathering on edges):
https://github.com/bevyengine/bevy/assets/2632925/fc14b0c1-2394-46a5-a2b9-a859efcd23ef
# Objective
- In particularly dark scenes, auto-exposure would lead to an unexpected
darkening of the view.
- Fixes #13446.
## Solution
When there are no samples, the average luminance should default to
something other than 0.0. We set it to `settings.min_log_lum`.
## Testing
I was able to reproduce the problem on the `auto_exposure` example by
setting the point light intensity to 2000 and looking into the
right-hand corner. There was a sudden darkening.
Now, the discontinuity is gone.
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
Fixes #13118
If you use `Sprite` or `Mesh2d` and create `Camera` with
* hdr=false
* any tonemapper
You would get
```
wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `sprite_pipeline`
Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline
Shader global ResourceBinding { group: 0, binding: 19 } is not available in the pipeline layout
Binding is missing from the pipeline layout
```
Because of missing tonemapping LUT bindings
## Solution
Add missing bindings for tonemapping LUT's to `SpritePipeline` &
`Mesh2dPipeline`
## Testing
I checked that
* `tonemapping`
* `color_grading`
* `sprite_animations`
* `2d_shapes`
* `meshlet`
* `deferred_rendering`
examples are still working
I checked the 2d cases with this code:
```rust
use bevy::{
    color::palettes::css::PURPLE, core_pipeline::tonemapping::Tonemapping, prelude::*,
    sprite::MaterialMesh2dBundle,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, toggle_tonemapping_method)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
    asset_server: Res<AssetServer>,
) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            ..default()
        },
        tonemapping: Tonemapping::BlenderFilmic,
        ..default()
    });
    commands.spawn(MaterialMesh2dBundle {
        mesh: meshes.add(Rectangle::default()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(Color::from(PURPLE)),
        ..default()
    });
    commands.spawn(SpriteBundle {
        texture: asset_server.load("asd.png"),
        ..default()
    });
}

fn toggle_tonemapping_method(
    keys: Res<ButtonInput<KeyCode>>,
    mut tonemapping: Query<&mut Tonemapping>,
) {
    let mut method = tonemapping.single_mut();
    if keys.just_pressed(KeyCode::Digit1) {
        *method = Tonemapping::None;
    } else if keys.just_pressed(KeyCode::Digit2) {
        *method = Tonemapping::Reinhard;
    } else if keys.just_pressed(KeyCode::Digit3) {
        *method = Tonemapping::ReinhardLuminance;
    } else if keys.just_pressed(KeyCode::Digit4) {
        *method = Tonemapping::AcesFitted;
    } else if keys.just_pressed(KeyCode::Digit5) {
        *method = Tonemapping::AgX;
    } else if keys.just_pressed(KeyCode::Digit6) {
        *method = Tonemapping::SomewhatBoringDisplayTransform;
    } else if keys.just_pressed(KeyCode::Digit7) {
        *method = Tonemapping::TonyMcMapface;
    } else if keys.just_pressed(KeyCode::Digit8) {
        *method = Tonemapping::BlenderFilmic;
    }
}
```
---
## Changelog
Fix the bug which led to a crash when using any tonemapper without HDR
for rendering sprites and 2d meshes.
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera. Deferred
rendering must be enabled for the reflections to appear.
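A sketch of the required camera setup:
```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Defaults are tweakable; see the component's settings.
        ScreenSpaceReflectionsSettings::default(),
        // SSR is deferred-only, and both prepasses must be present.
        DepthPrepass,
        DeferredPrepass,
    ));
}
```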
![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)
![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
Added a `Grey` trait to allow colors to create a generic "grey" color.
This currently assumes the color spaces follow the same gradient, which
I'm pretty sure isn't true, but it should make a "grey-ish" color
relative to the provided intensity.
# Objective
- Implements #13206
## Solution
- A small `Grey` trait was added and implemented for the common color
kinds.
## Testing
- Largely untested; writing the unit tests exposed the non-linear
relation between colors. I am debating adding an example to show this,
as I have no idea what color space represents what relation of grey, and
I figure others may be similarly confused.
## Changelog
- The `Grey` trait was added, along with the corresponding `grey` method
## BREAKING CHANGES
The `const` qualifier for `LinearRGBA::gray` was removed (the symbol
still exists via a trait; it's just not `const` anymore)
This commit makes us stop using the render world ECS for
`BinnedRenderPhase` and `SortedRenderPhase` and instead use resources
with `EntityHashMap`s inside. There are three reasons to do this:
1. We can use `clear()` to clear out the render phase collections
instead of recreating the components from scratch, allowing us to reuse
allocations.
2. This is a prerequisite for retained bins, because components can't be
retained from frame to frame in the render world, but resources can.
3. We want to move away from storing anything in components in the
render world ECS, and this is a step in that direction.
This patch results in a small performance benefit, due to point (1)
above.
## Changelog
### Changed
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources.
## Migration Guide
* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources. Instead of querying for the
components, look the camera entity up in the
`ViewBinnedRenderPhases`/`ViewSortedRenderPhases` tables.
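A migration sketch (the exact lookup API is assumed from the resource's
`EntityHashMap` internals):
```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::prelude::*;
use bevy::render::render_phase::ViewBinnedRenderPhases;
use bevy::render::view::ExtractedView;

// Runs in the render app: look each camera up in the resource instead of
// querying for a BinnedRenderPhase component on the view entity.
fn queue_custom_items(
    mut phases: ResMut<ViewBinnedRenderPhases<Opaque3d>>,
    views: Query<Entity, With<ExtractedView>>,
) {
    for view_entity in &views {
        let Some(_phase) = phases.get_mut(&view_entity) else {
            continue;
        };
        // ...enqueue items into `_phase` here...
    }
}
```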
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.
To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.
Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:
1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel
shapes.
7. *Horizon: Zero Dawn*-style clouds.
These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.
A new example, `volumetric_fog`, has been added to demonstrate the
effect.
## Changelog
### Added
* A new component, `VolumetricFogSettings`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.
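A sketch of the setup described above:
```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        VolumetricFogSettings::default(),
    ));
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                // Only lights with shadow maps interact with the fog.
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```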
![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)
![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)
[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
# Objective
- Depth of field is currently disabled on any wasm targets, but the bug
it's trying to avoid is only an issue in webgl.
## Solution
- Enable dof when compiling for webgpu
- I also removed the MSAA check, because sampling a depth texture
doesn't work with or without MSAA in webgl
- Unfortunately, Bokeh seems to be broken when using webgpu, so default
to Gaussian instead to make sure the defaults have the broadest platform
support
## Testing
- I added dof to the 3d_shapes example and compiled it to webgpu to
confirm it works
- I also tried compiling to webgl to confirm things still works and dof
isn't rendered.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
- Fixes#13377
- Fixes https://github.com/bevyengine/bevy/issues/13383
## Solution
- Even if the list of renderables is empty, the transparent phase needs
to run to set the clear color.
## Testing
- Tested on the `clear_color` example
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.
The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur, plus two diagonal blurs. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.
The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number], which we use to compute
the focal length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.
A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.
Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)
Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)
No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)
[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field
[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04
[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[f-number]: https://en.wikipedia.org/wiki/F-number
[Super 35]: https://en.wikipedia.org/wiki/Super_35
[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/
## Changelog
### Added
* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.
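A sketch of enabling the effect (field names per this patch; the values
are illustrative):
```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            mode: DepthOfFieldMode::Bokeh, // or DepthOfFieldMode::Gaussian
            focal_distance: 5.0,           // distance to the plane in focus
            ..default()
        },
    ));
}
```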
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
This commit fixes two issues in auto_exposure.wgsl:
* A `storageBarrier()` was incorrectly used where a `workgroupBarrier()`
should be used instead;
* Resetting the `histogram_shared` array would write beyond the 64th
index, which is out of bounds.
## Solution
The first issue is fixed by using the appropriate `workgroupBarrier()`
instead;
The second issue is fixed by adding a range check before setting
`histogram_shared[local_invocation_index] = 0u`.
## Testing
These changes were tested using the Xcode metal profiler, and I could
not find any noticeable change in compute shader performance.
# Objective
Fixes #13097 and other issues preventing the motion blur example from
working on wasm
## Solution
- Use a vec2 for padding
- Fix error initializing the `MotionBlur` struct on wasm+webgl2
- Disable MSAA on wasm+webgl2
- Fix `GlobalsUniform` padding getting added on the shader side for
webgpu builds
## Notes
The motion blur example now runs, but with artifacts. In addition to the
obvious black artifacts, the motion blur or dithering seem to just look
worse in a way I can't really describe. That may be expected.
```
AdapterInfo { name: "ANGLE (Apple, ANGLE Metal Renderer: Apple M1 Max, Unspecified Version)", vendor: 4203, device: 0, device_type: IntegratedGpu, driver: "", driver_info: "", backend: Gl }
```
<img width="1276" alt="Screenshot 2024-04-25 at 6 51 21 AM"
src="https://github.com/bevyengine/bevy/assets/200550/65401d4f-92fe-454b-9dbc-a2d89d3ad963">
# Objective
Currently, the 2d pipeline only has a transparent pass that is used for
everything. I want to have separate passes for opaque/alpha
mask/transparent meshes just like in 3d.
This PR does the basic work to start adding new phases to the 2d
pipeline and get the current setup a bit closer to 3d.
## Solution
- Use `ViewNode` for `MainTransparentPass2dNode`
- Added `Node2d::StartMainPass`, `Node2d::EndMainPass`
- Rename everything to clarify that the main pass is currently the
transparent pass
---
## Changelog
- Added `Node2d::StartMainPass`, `Node2d::EndMainPass`
## Migration Guide
If you were using `Node2d::MainPass` to order your own custom render
node, you now need to order it relative to `Node2d::StartMainPass` or
`Node2d::EndMainPass`, as in the sketch below.
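A sketch of the re-ordering, with a hypothetical custom node label
(render-graph API usage assumed):
```rust
use bevy::core_pipeline::core_2d::graph::{Core2d, Node2d};
use bevy::prelude::*;
use bevy::render::render_graph::{RenderGraphApp, RenderLabel};
use bevy::render::RenderApp;

// Hypothetical label for a custom node that should run inside the main pass.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, RenderLabel)]
struct MyCustomPass;

fn order_node(app: &mut App) {
    let render_app = app.sub_app_mut(RenderApp);
    // Before: ordered against Node2d::MainPass.
    render_app.add_render_graph_edges(
        Core2d,
        (Node2d::StartMainPass, MyCustomPass, Node2d::EndMainPass),
    );
}
```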
# Objective
The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and
color conversion functions. Both of those should be accessible without
enabling the `bevy_pbr` feature. For example, tonemapping can be done in
a non-PBR scenario, and it uses the color conversion functions.
Fixes #13207
## Solution
* Move mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`
## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos
---
## Changelog
* Moved mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`
## Migration Guide
In your shader code, replace imports of the mathematical constants from
`bevy_pbr::utils` with imports of the same constants from
`bevy_render::maths`.