Commit graph

260 commits

charlotte
027f8e21ec
Allow mix of hdr and non-hdr cameras to same render target (#13419)
Changes:
- Track whether an output texture has been written to yet and only clear
it on the first write.
- Use `ClearColorConfig` on `CameraOutputMode` instead of a raw
`LoadOp`.
- Track whether an output texture has been seen when specializing the
upscaling pipeline and use alpha blending for extra cameras rendering to
that texture that do not specify an explicit blend mode.

Fixes #6754

## Testing

Tested against provided test case in issue:

![image](https://github.com/bevyengine/bevy/assets/10366310/d066f069-87fb-4249-a4d9-b6cb1751971b)

---

## Changelog

- Allow cameras rendering to the same output texture with mixed hdr to
work correctly.

## Migration Guide

- Change `CameraOutputMode` to use `ClearColorConfig` instead of `LoadOp`.
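
A minimal sketch of configuring a second, non-HDR camera that writes to the same output as an HDR camera. The `Write` variant's fields follow the migration guide above, but the exact paths and field names should be treated as assumptions:

```rust
use bevy::prelude::*;
use bevy::render::camera::{CameraOutputMode, ClearColorConfig};

// Hedged sketch: the overlay camera renders after the main camera and must not
// clear what the main camera already wrote to the shared output texture.
fn setup_overlay_camera(mut commands: Commands) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            order: 1,
            output_mode: CameraOutputMode::Write {
                blend_state: None,
                // `ClearColorConfig` replaces the raw `LoadOp` used previously.
                clear_color: ClearColorConfig::None,
            },
            ..default()
        },
        ..default()
    });
}
```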
2024-06-06 20:55:05 +00:00
Lukas Chodosevičius
2b20af6b79
Skip tonemapping in case it is none (#13679)
# Objective
Skip the unnecessary blit when tonemapping is set to none.

## Testing
Only tested locally on our app.

## Changelog

Tonemapping no longer executes when it is set to none.
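
For reference, a camera configured like the sketch below (module path assumed from the standard `bevy_core_pipeline` layout) now skips the tonemapping pass and its blit entirely:

```rust
use bevy::core_pipeline::tonemapping::Tonemapping;
use bevy::prelude::*;

// With this change, `Tonemapping::None` means no tonemapping work at all,
// rather than a pass that effectively does nothing.
fn setup(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        tonemapping: Tonemapping::None,
        ..default()
    });
}
```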

Co-authored-by: Lukas Chodosevicius <lukaschodosevicius@Lukass-MacBook-Pro.local>
2024-06-05 11:32:46 +00:00
Patrick Walton
ace4c6023b
Implement subpixel morphological antialiasing, or SMAA. (#13423)
This commit implements a large subset of [*subpixel morphological
antialiasing*], better known as SMAA. SMAA is a 2011 antialiasing
technique that detects jaggies in an aliased image and smooths them out.
Despite its age, it's been a continual staple of games for over a
decade. Four quality presets are available: *low*, *medium*, *high*, and
*ultra*. I set the default to *high*, on account of modern GPUs being
significantly faster than they were in 2011.

Like the already-implemented FXAA, SMAA works on an unaliased image.
Unlike FXAA, it requires three passes: (1) edge detection; (2) blending
weight calculation; (3) neighborhood blending. Each of the first two
passes writes an intermediate texture for use by the next pass. The
first pass also writes to a stencil buffer in order to dramatically
reduce the number of pixels that the second pass has to examine. Also
unlike FXAA, two built-in lookup textures are required; I bundle them
into the library in compressed KTX2 format.

The [reference implementation of SMAA] is in HLSL, with abundant use of
preprocessor macros to achieve GLSL compatibility. Unfortunately, the
reference implementation predates WGSL by over a decade, so I had to
translate the HLSL to WGSL manually. As much as was reasonably possible
without sacrificing readability, I tried to translate line by line,
preserving comments, both to aid reviewing and to allow patches to the
HLSL to more easily apply to the WGSL. Most of SMAA's features are
supported, but in the interests of making this patch somewhat less huge,
I skipped a few of the more exotic ones:

* The temporal variant is currently unsupported. This is and has been
used in shipping games, so supporting temporal SMAA would be useful
follow-up work. It would, however, require some significant work on TAA
to ensure compatibility, so I opted to skip it in this patch.

* Depth- and chroma-based edge detection are unimplemented; only luma
is. Depth is lower-quality, but faster; chroma is higher-quality, but
slower. Luma is the suggested default edge detection algorithm. (Note
that depth-based edge detection wouldn't work on WebGL 2 anyway, because
of the Naga bug whereby depth sampling is miscompiled in GLSL. This is
the same bug that prevents depth of field from working on that
platform.)

* Predicated thresholding is currently unsupported.

* My implementation is incompatible with SSAA and MSAA, unlike the
original; MSAA must be turned off to use SMAA in Bevy. I believe this
feature was rarely used in practice.

The `anti_aliasing` example has been updated to allow experimentation
with and testing of the different SMAA quality presets. Along the way, I
refactored the example's help text rendering code a bit to eliminate
code repetition.

SMAA is fully supported on WebGL 2.

Fixes #9819.

[*subpixel morphological antialiasing*]: https://www.iryoku.com/smaa/

[reference implementation of SMAA]: https://github.com/iryoku/smaa

## Changelog

### Added

* Subpixel morphological antialiasing, or SMAA, is now available. To use
it, add the `SmaaSettings` component to your `Camera`.
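
A minimal usage sketch, assuming the component lives in `bevy::core_pipeline::smaa` and exposes a single `preset` field with the quality presets named above:

```rust
use bevy::core_pipeline::smaa::{SmaaPreset, SmaaSettings};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // SMAA is incompatible with MSAA (see the limitations above), so turn MSAA off.
        .insert_resource(Msaa::Off)
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // The preset defaults to `High`; shown explicitly here for illustration.
        SmaaSettings { preset: SmaaPreset::High },
    ));
}
```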

![Screenshot 2024-05-18
134311](https://github.com/bevyengine/bevy/assets/157897/ffbd611c-1b32-4491-b2e2-e410688852ee)

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-06-04 17:07:34 +00:00
Ricky Taylor
9b9d3d81cb
Normalise matrix naming (#13489)
# Objective
- Fixes #10909
- Fixes #8492

## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
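
The convention makes matrix composition read naturally, since adjacent space names line up. A small illustrative sketch using `Mat4` as re-exported by Bevy (the function and argument names are hypothetical):

```rust
use bevy::math::Mat4;

// With `x_from_y` names, the output space of the right operand matches the
// input space of the left operand, so transcription errors are easy to spot:
// clip_from_world = clip_from_view * view_from_world.
fn clip_from_world(clip_from_view: Mat4, view_from_world: Mat4) -> Mat4 {
    clip_from_view * view_from_world
}
```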

## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.

---

## Changelog
- Renamed matrices across the engine to follow the `x_from_y` naming,
making the space conversion more obvious.

## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
2024-06-03 16:56:53 +00:00
Aevyrie
b45786df41
Add Skybox Motion Vectors (#13617)
# Objective

- Add motion vector support to the skybox
- This fixes the last remaining "gap" to complete the motion blur
feature

## Solution

- Add a pipeline for the skybox to write motion vectors to the prepass

## Testing

- Used examples to test motion vectors using motion blur


https://github.com/bevyengine/bevy/assets/2632925/74c0778a-7e77-4e68-8111-05791e4bfdd2

---------

Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
2024-06-02 16:09:28 +00:00
Patrick Walton
be053b1d7c
Implement motion vectors and TAA for skinned meshes and meshes with morph targets. (#13572)
This is a revamped equivalent to #9902, though it shares none of the
code. It handles all special cases that I've tested correctly.

The overall technique consists of double-buffering the joint matrix and
morph weights buffers, as most of the previous attempts to solve this
problem did. The process is generally straightforward. Note that, to
avoid regressing the ability of mesh extraction, skin extraction, and
morph target extraction to run in parallel, I had to add a new system to
rendering, `set_mesh_motion_vector_flags`. The comment there explains
the details; it generally runs very quickly.
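
The double-buffering idea can be sketched roughly as follows; this illustrates the general technique only, not the actual Bevy data structures:

```rust
use bevy::math::Mat4;

// Keep both this frame's and last frame's joint matrices so the skinning shader
// can compute motion vectors from the difference of the two skinned positions.
struct DoubleBufferedJoints {
    current: Vec<Mat4>,
    previous: Vec<Mat4>,
}

impl DoubleBufferedJoints {
    fn begin_frame(&mut self, new_joints: Vec<Mat4>) {
        // Last frame's "current" becomes this frame's "previous".
        self.previous = std::mem::replace(&mut self.current, new_joints);
    }
}
```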

I've tested this with modified versions of the `animated_fox`,
`morph_targets`, and `many_foxes` examples that add TAA, and the patch
works. To avoid bloating those examples, I didn't add switches for TAA
to them.

Addresses points (1) and (2) of #8423.

## Changelog

### Fixed

* Motion vectors, and therefore TAA, are now supported for meshes with
skins and/or morph targets.
2024-05-31 17:02:28 +00:00
Aevyrie
16fe7e64cc
Fix: Motion blur should sample onscreen fragments with no depth (#13573)
# Objective

- Motion blur does not currently work with skyboxes or anything else
that does not write to depth.

## Solution

- When computing blur, include fragments with no depth, as long as they
are onscreen.

## Testing

- Tested with the examples - the motion_blur example uncovered a bug
with this fix, where offscreen pixels were now being sampled and
causing artifacts.
- Attached example showing the skybox being sampled in the blur (note
the feathering on edges):



https://github.com/bevyengine/bevy/assets/2632925/fc14b0c1-2394-46a5-a2b9-a859efcd23ef
2024-05-30 13:52:47 +00:00
Aevyrie
c566ae7155
Run motion blur before TAA to reduce artifacts (#13574)
# Objective

- Reduce edge artifacts and noise in motion blur with TAA.

## Solution

- Reorders motion blur and TAA, so TAA runs after motion blur.

## Testing

- Tested with built in examples, as well as some external test scenes.

Before:


![image](https://github.com/bevyengine/bevy/assets/2632925/5522b749-9235-4b11-b560-c35350ab4b92)


![image](https://github.com/bevyengine/bevy/assets/2632925/e675aa0d-de0d-4833-9c33-ba7b3cd79955)


After:


![image](https://github.com/bevyengine/bevy/assets/2632925/97261093-1b8e-41ab-840f-f999a4e15a6d)


![image](https://github.com/bevyengine/bevy/assets/2632925/70215d8f-2ec7-4835-9e2d-ccead8972a5e)
2024-05-30 13:52:02 +00:00
Alice Cecile
9d74e16821
Set the default target exposure to the minimum value, not 0 (#13562)
# Objective

- In particularly dark scenes, auto-exposure would lead to an unexpected
darkening of the view.
- Fixes #13446.

## Solution

When there are no samples, the average luminance should default to something
other than 0.0. We set it to `settings.min_log_lum`.

## Testing

I was able to reproduce the problem on the `auto_exposure` example by
setting the point light intensity to 2000 and looking into the
right-hand corner. There was a sudden darkening.

Now, the discontinuity is gone.

---------

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
2024-05-29 22:37:42 +00:00
arcashka
cdc605cc48
add tonemapping LUT bindings for sprite and mesh2d pipelines (#13262)
Fixes #13118
If you use `Sprite` or `Mesh2d` and create `Camera` with
* hdr=false
* any tonemapper

You would get
```
wgpu error: Validation Error

Caused by:
    In Device::create_render_pipeline
      note: label = `sprite_pipeline`
    Error matching ShaderStages(FRAGMENT) shader requirements against the pipeline
    Shader global ResourceBinding { group: 0, binding: 19 } is not available in the pipeline layout
    Binding is missing from the pipeline layout
```
This happens because of missing tonemapping LUT bindings.

## Solution
Add missing bindings for tonemapping LUT's to `SpritePipeline` &
`Mesh2dPipeline`

## Testing
I checked that
* `tonemapping`
* `color_grading`
* `sprite_animations`
* `2d_shapes`
* `meshlet`
* `deferred_rendering`
examples are still working

I checked the 2D cases with this code:
```rust
use bevy::{
    color::palettes::css::PURPLE, core_pipeline::tonemapping::Tonemapping, prelude::*,
    sprite::MaterialMesh2dBundle,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, toggle_tonemapping_method)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
    asset_server: Res<AssetServer>,
) {
    commands.spawn(Camera2dBundle {
        camera: Camera {
            hdr: false,
            ..default()
        },
        tonemapping: Tonemapping::BlenderFilmic,
        ..default()
    });
    commands.spawn(MaterialMesh2dBundle {
        mesh: meshes.add(Rectangle::default()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(Color::from(PURPLE)),
        ..default()
    });

    commands.spawn(SpriteBundle {
        texture: asset_server.load("asd.png"),
        ..default()
    });
}

fn toggle_tonemapping_method(
    keys: Res<ButtonInput<KeyCode>>,
    mut tonemapping: Query<&mut Tonemapping>,
) {
    let mut method = tonemapping.single_mut();

    if keys.just_pressed(KeyCode::Digit1) {
        *method = Tonemapping::None;
    } else if keys.just_pressed(KeyCode::Digit2) {
        *method = Tonemapping::Reinhard;
    } else if keys.just_pressed(KeyCode::Digit3) {
        *method = Tonemapping::ReinhardLuminance;
    } else if keys.just_pressed(KeyCode::Digit4) {
        *method = Tonemapping::AcesFitted;
    } else if keys.just_pressed(KeyCode::Digit5) {
        *method = Tonemapping::AgX;
    } else if keys.just_pressed(KeyCode::Digit6) {
        *method = Tonemapping::SomewhatBoringDisplayTransform;
    } else if keys.just_pressed(KeyCode::Digit7) {
        *method = Tonemapping::TonyMcMapface;
    } else if keys.just_pressed(KeyCode::Digit8) {
        *method = Tonemapping::BlenderFilmic;
    }
}
```
---

## Changelog
Fixed the bug that led to a crash when using any tonemapper without HDR to
render sprites and 2D meshes.
2024-05-28 12:09:26 +00:00
Patrick Walton
f398674e51
Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
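
A minimal setup sketch based on the paragraph above; the import paths are assumptions:

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsSettings;
use bevy::prelude::*;

// Enable SSR on a camera. Both prepasses must be present, and the camera must
// use the deferred renderer, for reflections to show up.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthPrepass,
        DeferredPrepass,
        ScreenSpaceReflectionsSettings::default(),
    ));
}
```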

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
2024-05-27 13:43:40 +00:00
Daniel Miller
1d29f8e6f6
Added a Grey trait, and implementations on baked-in colors. Fixes #13206 (#13237)
Added a Grey trait to allow colors to create a generic "grey" color.

This currently assumes the color spaces follow the same gradient, which
I'm pretty sure isn't true, but it should make a "grey-ish" color
relative to the provided intensity.

# Objective

- Implements #13206 

## Solution

- A small `Grey` trait was added and implemented for the common color
kinds.
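
A rough sketch of the shape such a trait might take; the actual signature in `bevy_color` may differ:

```rust
// Hypothetical sketch only: a single constructor producing a grey of the given
// intensity for any color type that implements the trait.
pub trait Grey {
    fn grey(intensity: f32) -> Self;
}
```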

## Testing

- Currently untested; unit tests exposed the non-linear relation between
colors. I am debating adding an example to show this, as I have no idea
what color space represents what relation of grey, and I figure others
may be similarly confused.

## Changelog

- The `Grey` trait was added, and the corresponding `grey` constructor was
implemented for the built-in color types.

## BREAKING CHANGES

The const qualifier for LinearRGBA::gray was removed (the symbol still
exists via a trait, it's just not const anymore)
2024-05-26 12:53:50 +00:00
Patrick Walton
9da0b2a0ec
Make render phases render world resources instead of components. (#13277)
This commit makes us stop using the render world ECS for
`BinnedRenderPhase` and `SortedRenderPhase` and instead use resources
with `EntityHashMap`s inside. There are three reasons to do this:

1. We can use `clear()` to clear out the render phase collections
instead of recreating the components from scratch, allowing us to reuse
allocations.

2. This is a prerequisite for retained bins, because components can't be
retained from frame to frame in the render world, but resources can.

3. We want to move away from storing anything in components in the
render world ECS, and this is a step in that direction.

This patch results in a small performance benefit, due to point (1)
above.

## Changelog

### Changed

* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources.

## Migration Guide

* The `BinnedRenderPhase` and `SortedRenderPhase` render world
components have been replaced with `ViewBinnedRenderPhases` and
`ViewSortedRenderPhases` resources. Instead of querying for the
components, look the camera entity up in the
`ViewBinnedRenderPhases`/`ViewSortedRenderPhases` tables.
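
A hedged sketch of the new lookup pattern. The assumption that the resources dereference to an `EntityHashMap` keyed by the view entity follows the PR text; treat the exact types and methods as assumptions:

```rust
use bevy::core_pipeline::core_3d::Transparent3d;
use bevy::prelude::*;
use bevy::render::render_phase::ViewSortedRenderPhases;
use bevy::render::view::ExtractedView;

// Instead of querying for a `SortedRenderPhase` component on the view entity,
// look the view entity up in the resource.
fn queue_my_items(
    mut phases: ResMut<ViewSortedRenderPhases<Transparent3d>>,
    views: Query<Entity, With<ExtractedView>>,
) {
    for view_entity in &views {
        let Some(phase) = phases.get_mut(&view_entity) else {
            continue;
        };
        // ... push phase items into `phase` here ...
        let _ = phase;
    }
}
```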
2024-05-21 18:23:04 +00:00
BD103
53f4c38e7b
Fix lints on beta Rust (#13444)
# Objective

- Fixes #13437!

## Solution

- Use `f32::INFINITY` instead of `std::f32::INFINITY`.

## Testing

```shell
cargo +beta clippy --workspace --all-targets --all-features -- -Dwarnings
```
2024-05-20 20:40:59 +00:00
Patrick Walton
19bfa41768
Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
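
A minimal sketch of that setup; the component paths (assumed to be under `bevy::pbr`) and defaults are assumptions:

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Fog is configured on the camera...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    // ...and only shadow-mapped directional lights marked `VolumetricLight`
    // produce light shafts.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```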

The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.

Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.

2. A post-processing blur (especially useful when combined with (1)).

3. Supporting point lights and spot lights.

4. Supporting lights with no shadow maps.

5. Supporting irradiance volumes and reflection probes.

6. Voxel components that reuse the volumetric fog code to create voxel
shapes.

7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.

A new example, `volumetric_fog`, has been added to demonstrate the
effect.

## Changelog

### Added

* A new component, `VolumetricFogSettings`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.

* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.

![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)

![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)

[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html

[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/

[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
2024-05-16 17:13:18 +00:00
IceSentry
d9993a8092
Enable depth of field on webgpu (#13374)
# Objective

- Depth of field is currently disabled on all wasm targets, but the bug
it's trying to avoid is only an issue in webgl.

## Solution

- Enable dof when compiling for webgpu
- I also remove the msaa check because sampling a depth texture doesn't
work with or without msaa in webgl
- Unfortunately, Bokeh seems to be broken when using webgpu, so default
to Gaussian instead to make sure the defaults have the broadest platform
support

## Testing

- I added dof to the 3d_shapes example and compiled it to webgpu to
confirm it works
- I also tried compiling to webgl to confirm things still work and dof
isn't rendered.

---------

Co-authored-by: James Liu <contact@jamessliu.com>
2024-05-16 06:48:28 +00:00
Pietro
d17fb160b0
Fix ClearColor in 2d pipelines (#13378)
# Objective

- Fixes #13377
- Fixes https://github.com/bevyengine/bevy/issues/13383

## Solution

- Even if there are no renderables, the transparent phase needs to run to
set the clear color.

## Testing

- Tested on the `clear_color` example
2024-05-15 20:36:02 +00:00
Patrick Walton
df31b808c3
Implement fast depth of field as a postprocessing effect. (#13009)
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.

The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur, plus two diagonal blurs. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.

The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number], which we use to compute
the focal length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.

A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.

Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)

Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)

No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)

[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field

[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[f-number]: https://en.wikipedia.org/wiki/F-number

[Super 35]: https://en.wikipedia.org/wiki/Super_35

[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

## Changelog

### Added

* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.
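
A usage sketch, with the module path and field names treated as assumptions:

```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            // Hexagonal bokeh; `DepthOfFieldMode::Gaussian` is the faster option.
            mode: DepthOfFieldMode::Bokeh,
            // Distance (in world units) at which objects are in focus.
            focal_distance: 5.0,
            ..default()
        },
    ));
}
```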

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
2024-05-13 18:23:56 +00:00
Bram Buurlage
bfc13383e0
Fix incorrect workgroupBarrier and OOB array access in auto_exposure (#13283)
This commit fixes two issues in auto_exposure.wgsl:
* A `storageBarrier()` was incorrectly used where a `workgroupBarrier()`
should be used instead;
* Resetting the `histogram_shared` array would write beyond the 64th
index, which is out of bounds.

## Solution

The first issue is fixed by using the appropriate `workgroupBarrier()`
instead; the second issue is fixed by adding a range check before setting
`histogram_shared[local_invocation_index] = 0u`.

## Testing

These changes were tested using the Xcode metal profiler, and I could
not find any noticeable change in compute shader performance.
2024-05-12 23:24:58 +00:00
Rob Parrett
2fd432c463
Fix motion blur on wasm (#13099)
# Objective

Fixes #13097 and other issues preventing the motion blur example from
working on wasm

## Solution

- Use a vec2 for padding
- Fix error initializing the `MotionBlur` struct on wasm+webgl2
- Disable MSAA on wasm+webgl2
- Fix `GlobalsUniform` padding getting added on the shader side for
webgpu builds

## Notes

The motion blur example now runs, but with artifacts. In addition to the
obvious black artifacts, the motion blur or dithering seem to just look
worse in a way I can't really describe. That may be expected.

```
AdapterInfo { name: "ANGLE (Apple, ANGLE Metal Renderer: Apple M1 Max, Unspecified Version)", vendor: 4203, device: 0, device_type: IntegratedGpu, driver: "", driver_info: "", backend: Gl }
```
<img width="1276" alt="Screenshot 2024-04-25 at 6 51 21 AM"
src="https://github.com/bevyengine/bevy/assets/200550/65401d4f-92fe-454b-9dbc-a2d89d3ad963">
2024-05-12 21:03:36 +00:00
IceSentry
64e1a7835a
Clean up 2d render phases (#12982)
# Objective

Currently, the 2d pipeline only has a transparent pass that is used for
everything. I want to have separate passes for opaque/alpha
mask/transparent meshes just like in 3d.

This PR does the basic work to start adding new phases to the 2d
pipeline and get the current setup a bit closer to 3d.

## Solution

- Use `ViewNode` for `MainTransparentPass2dNode`
- Added `Node2d::StartMainPass`, `Node2d::EndMainPass`
- Rename everything to clarify that the main pass is currently the
transparent pass

---

## Changelog

- Added `Node2d::StartMainPass`, `Node2d::EndMainPass`

## Migration Guide

If you were using `Node2d::MainPass` to order your own custom render
node, you now need to order it relative to `Node2d::StartMainPass` or
`Node2d::EndMainPass`.
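
A hedged sketch of that migration; the render-graph helper methods and the label derive are assumptions about the surrounding API, and `MyCustomNode` is a hypothetical user node:

```rust
use bevy::app::SubApp;
use bevy::core_pipeline::core_2d::graph::{Core2d, Node2d};
use bevy::render::render_graph::{RenderGraphApp, RenderLabel};

// Hypothetical label for a user-defined render node.
#[derive(Debug, Clone, PartialEq, Eq, Hash, RenderLabel)]
struct MyCustomNode;

fn order_custom_node(render_app: &mut SubApp) {
    // Previously this edge would have been expressed against `Node2d::MainPass`.
    render_app.add_render_graph_edges(
        Core2d,
        (Node2d::StartMainPass, MyCustomNode, Node2d::EndMainPass),
    );
}
```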
2024-05-08 08:13:39 +00:00
arcashka
6027890a11
move wgsl color operations from bevy_pbr to bevy_render (#13209)
# Objective

The `bevy_pbr/utils.wgsl` shader file contains mathematical constants and
color conversion functions. Both of those should be accessible without
enabling the `bevy_pbr` feature. For example, tonemapping can be done in a
non-PBR scenario, and it uses color conversion functions.

Fixes #13207

## Solution

* Move mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Move color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Testing
Ran multiple examples, checked they are working:
* tonemapping
* color_grading
* 3d_scene
* animated_material
* deferred_rendering
* 3d_shapes
* fog
* irradiance_volumes
* meshlet
* parallax_mapping
* pbr
* reflection_probes
* shadow_biases
* 2d_gizmos
* light_gizmos
---

## Changelog
* Moved mathematical constants (such as PI, E) from
`bevy_pbr/src/render/utils.wgsl` into `bevy_render/src/maths.wgsl`
* Moved color conversion functions from `bevy_pbr/src/render/utils.wgsl`
into new file `bevy_render/src/color_operations.wgsl`

## Migration Guide
In your shader code, replace uses of the mathematical constants from
`bevy_pbr::utils` with the same constants from `bevy_render::maths`.
2024-05-04 10:30:23 +00:00
Bram Buurlage
d390420093
Implement Auto Exposure plugin (#12792)
# Objective

- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
  - Metering masks
  - Compensation curves
  - Smooth exposure transitions 

This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.

## Solution

An auto exposure compute shader builds a 64 bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it allows giving more weight to certain areas
based on a mask.

---

## Changelog

- Added: `AutoExposure` plugin that allows adjusting a camera's exposure
based on its scene's luminance.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-05-03 17:45:17 +00:00
Patrick Walton
31835ff76d
Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).

This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.

This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.

The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.
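
A sketch of tagging two LOD meshes with ranges; the `abrupt` constructor name and module path are assumptions:

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

// Each LOD entity carries its own range; `VisibilityRange` does not propagate
// to children, so both meshes are tagged explicitly.
fn spawn_lods(
    commands: &mut Commands,
    high_poly: Handle<Mesh>,
    low_poly: Handle<Mesh>,
    material: Handle<StandardMaterial>,
) {
    // Detailed mesh up close...
    commands.spawn((
        PbrBundle { mesh: high_poly, material: material.clone(), ..default() },
        VisibilityRange::abrupt(0.0, 20.0),
    ));
    // ...cheap mesh farther away.
    commands.spawn((
        PbrBundle { mesh: low_poly, material, ..default() },
        VisibilityRange::abrupt(20.0, 200.0),
    ));
}
```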

Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.

Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.

A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.

[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map

[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.

## Changelog

### Added

* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).

## Screenshots

High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)

Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)

Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-05-03 00:11:35 +00:00
BD103
e357b63448
Add README.md to all crates (#13184)
# Objective

- `README.md` is a common file that usually gives an overview of the
folder it is in.
- When on <https://crates.io>, `README.md` is rendered as the main
description.
- Many crates in this repository are lacking `README.md` files, which
makes it more difficult to understand their purpose.

<img width="1552" alt="image"
src="https://github.com/bevyengine/bevy/assets/59022059/78ebf91d-b0c4-4b18-9874-365d6310640f">

- There are also a few inconsistencies with `README.md` files that this
PR and its follow-ups intend to fix.

## Solution

- Create a `README.md` file for all crates that do not have one.
- This file only contains the title of the crate (underscores removed,
proper capitalization, acronyms expanded) and the <https://shields.io>
badges.
- Remove the `readme` field in `Cargo.toml` for `bevy` and
`bevy_reflect`.
- This field is redundant because [Cargo automatically detects
`README.md`
files](https://doc.rust-lang.org/cargo/reference/manifest.html#the-readme-field).
The field is only there if you name it something else, like `INFO.md`.
- Fix capitalization of `bevy_utils`'s `README.md`.
- It was originally `Readme.md`, which is inconsistent with the rest of
the project.
- I created two commits renaming it to `README.md`, because Git appears
to be case-insensitive.
- Expand acronyms in title of `bevy_ptr` and `bevy_utils`.
- In the commit where I created all the new `README.md` files, I
preferred using expanded acronyms in the titles. (E.g. "Bevy Developer
Tools" instead of "Bevy Dev Tools".)
- This commit changes the title of existing `README.md` files to follow
the same scheme.
- I do not feel strongly about this change, please comment if you
disagree and I can revert it.
- Add <https://shields.io> badges to `bevy_time` and `bevy_transform`,
which are the only crates currently lacking them.

---

## Changelog

- Added `README.md` files to all crates missing it.
2024-05-02 18:56:00 +00:00
Patrick Walton
961b24deaf
Implement filmic color grading. (#13121)
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:

* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].

* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.

* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function].

* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.

A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.

[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space

[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D

[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space

[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV

[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function

## Changelog

### Added

* Many new filmic color grading options have been added to the
`ColorGrading` component.

## Migration Guide

* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.
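
A small sketch of adjusting the expanded component, with field names inferred from the migration guide above (treat them as assumptions):

```rust
use bevy::prelude::*;
use bevy::render::view::ColorGrading;

fn warm_look(mut cameras: Query<&mut ColorGrading, With<Camera>>) {
    for mut grading in &mut cameras {
        // Global white-point and saturation adjustments.
        grading.global.temperature = 0.2;
        grading.global.post_saturation = 1.1;
        // Per-section adjustments, applied to shadows, midtones, and highlights at once.
        for section in grading.all_sections_mut() {
            section.gamma = 1.05;
        }
    }
}
```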

## Screenshots

![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)

![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
2024-05-02 12:18:59 +00:00
Patrick Walton
16531fb3e3
Implement GPU frustum culling. (#12889)
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.
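
A minimal opt-in sketch; the module path for the two marker components is an assumption:

```rust
use bevy::prelude::*;
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn setup(mut commands: Commands) {
    // GPU frustum culling plus no CPU culling; as noted above, `GpuCulling`
    // alone currently does nothing useful.
    commands.spawn((Camera3dBundle::default(), GpuCulling, NoCpuCulling));
}
```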

Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.

A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.

This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.

## Changelog

### Added

* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
2024-04-28 12:50:00 +00:00
Aevyrie
ade70b3925
Per-Object Motion Blur (#9924)
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0

# Objective

- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710

## Solution

- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation is
combined from knowledge from multiple papers and articles. The approach
itself, and the shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.

## Future Improvements

- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects which results in
hard edges. There are solutions to this by either dilating the object or
the motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar, however after
playing with the parameters, it gives quite nice results with 4 samples,
and is significantly better than the artifacts generated when not
jittering.

---

## Changelog

- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity.
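
A usage sketch; the bundle name comes from the changelog, while the settings fields are assumptions:

```rust
use bevy::core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        MotionBlurBundle {
            motion_blur: MotionBlur {
                // Fraction of the frame the virtual shutter stays open.
                shutter_angle: 1.0,
                // Few samples look good thanks to the jittered sampling described above.
                samples: 4,
                ..default()
            },
            ..default()
        },
    ));
}
```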

---------

Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
2024-04-25 01:16:02 +00:00
vero
158defd67b
Document Camera coordinate space (#13012)
# Objective

Missing docs

## Solution

Add docs paraphrased from the Cart's mouth:
https://discord.com/channels/691052431525675048/691052431974465548/1172305792154738759
> It follows the natural "results" of right handed y-up. The default
camera will face "forward" in -Z, with +X being "right". The RH y-up
setup is reasonably common. Thats why I asked for existing examples. I
think we should appeal to the masses here / see how other RH Y-up 3D
packages / engines handle this

---------

Co-authored-by: Robert Swain <robert.swain@gmail.com>
2024-04-18 13:02:15 +00:00
Brezak
368c5cef1a
Implement clone for most bundles. (#12993)
# Objective

Closes #12985.

## Solution

- Derive clone for most types with bundle in their name.
- Bundle types missing clone:
-
[`TextBundle`](https://docs.rs/bevy/latest/bevy/prelude/struct.TextBundle.html)
(Contains
[`ContentSize`](https://docs.rs/bevy/latest/bevy/ui/struct.ContentSize.html)
which can't be cloned because it itself contains a `Option<MeasureFunc>`
where
[`MeasureFunc`](https://docs.rs/taffy/0.3.18/taffy/node/enum.MeasureFunc.html)
isn't clone)
-
[`ImageBundle`](https://docs.rs/bevy/latest/bevy/prelude/struct.ImageBundle.html)
(Same as `TextBundle`)
-
[`AtlasImageBundle`](https://docs.rs/bevy/latest/bevy/prelude/struct.AtlasImageBundle.html)
(Will be deprecated in 0.14, so there's no point)
2024-04-16 16:37:09 +00:00
BD103
7b8d502083
Fix beta lints (#12980)
# Objective

- Fixes #12976

## Solution

This one is a doozy.

- Run `cargo +beta clippy --workspace --all-targets --all-features` and
fix all issues
- This includes:
- Moving inner attributes to be outer attributes, when the item in
question has both inner and outer attributes
  - Use `ptr::from_ref` in more scenarios
- Extend the valid idents list used by `clippy:doc_markdown` with more
names
  - Use `Clone::clone_from` when possible
  - Remove redundant `ron` import
  - Add backticks to **so many** identifiers and items
    - I'm sorry whoever has to review this

---

## Changelog

- Added links to more identifiers in documentation.
2024-04-16 02:46:46 +00:00
Robert Swain
ab7cbfa8fc
Consolidate Render(Ui)Materials(2d) into RenderAssets (#12827)
# Objective

- Replace `RenderMaterials` / `RenderMaterials2d` / `RenderUiMaterials`
with `RenderAssets` to enable implementing changes to one thing,
`RenderAssets`, that applies to all use cases rather than duplicating
changes everywhere for multiple things that should be one thing.
- Adopts #8149 

## Solution

- Make RenderAsset generic over the destination type rather than the
source type as in #8149
- Use `RenderAssets<PreparedMaterial<M>>` etc for render materials

---

## Changelog

- Changed:
- The `RenderAsset` trait is now implemented on the destination type.
Its `SourceAsset` associated type refers to the type of the source
asset.
- `RenderMaterials`, `RenderMaterials2d`, and `RenderUiMaterials` have
been replaced by `RenderAssets<PreparedMaterial<M>>` and similar.

## Migration Guide

- `RenderAsset` is now implemented for the destination type rather than
the source asset type. The source asset type is now the `RenderAsset`
trait's `SourceAsset` associated type.
2024-04-09 13:26:34 +00:00
robtfm
452821dd52
more robust gpu image use (#12606)
# Objective

make morph targets and tonemapping more tolerant of delayed image
loading.

neither of these actually fail currently unless using a bespoke loader
(and even then it would be rare), but i am working on adding throttling
for asset gpu uploads (as a stopgap until we can do proper asset
streaming) and they break with that.

## Solution

when a mesh with morph targets is uploaded to the gpu, the prepare
function uploads the morph target texture if it's available, otherwise
it uploads without morph targets. this is generally fine as long as
morph targets are typically loaded from bytes (in gltf loader), but may
fail for a custom loader if the asset server async-loads the target
texture and the texture is not available yet. the mesh fails to render
and doesn't update when the image is loaded
-> if morph targets are specified but not ready yet, retry mesh upload
next frame

tonemapping `unwrap`s on the lookup table image. this is never a problem
since the image is added via `include_bytes!`, but could be a problem in
future with asset gpu throttling/streaming.
-> if the lookup texture is not yet available, use a fallback
-> in the node, check if the fallback was used before caching the bind
group
2024-04-07 17:18:58 +00:00
JMS55
9264850a1c
Fix skybox wrong alpha (#12888)
Fix https://github.com/bevyengine/bevy/issues/12740
2024-04-06 08:08:55 +00:00
Cameron
01649f13e2
Refactor App and SubApp internals for better separation (#9202)
# Objective

This is a necessary precursor to #9122 (this was split from that PR to
reduce the amount of code to review all at once).

Moving `!Send` resource ownership to `App` will make it unambiguously
`!Send`. `SubApp` must be `Send`, so it can't wrap `App`.

## Solution

Refactor `App` and `SubApp` to not have a recursive relationship. Since
`SubApp` no longer wraps `App`, once `!Send` resources are moved out of
`World` and into `App`, `SubApp` will become unambiguously `Send`.

There could be less code duplication between `App` and `SubApp`, but
that would break `App` method chaining.

## Changelog

- `SubApp` no longer wraps `App`.
- `App` fields are no longer publicly accessible.
- `App` can no longer be converted into a `SubApp`.
- Various methods now return references to a `SubApp` instead of an
`App`.
## Migration Guide

- To construct a sub-app, use `SubApp::new()`. `App` can no longer
convert into `SubApp`.
- If you implemented a trait for `App`, you may want to implement it for
`SubApp` as well.
- If you're accessing `app.world` directly, you now have to use
`app.world()` and `app.world_mut()`.
- `App::sub_app` now returns `&SubApp`.
- `App::sub_app_mut`  now returns `&mut SubApp`.
- `App::get_sub_app` now returns `Option<&SubApp>`.
- `App::get_sub_app_mut` now returns `Option<&mut SubApp>`.
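
A small sketch of the world-access change:

```rust
use bevy::prelude::*;

fn configure(app: &mut App) {
    // Before: `app.world.spawn_empty();`
    // After: fields are private, so go through the accessor methods.
    app.world_mut().spawn_empty();
    let entity_count = app.world().entities().len();
    let _ = entity_count;
}
```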
2024-03-31 03:16:10 +00:00
Patrick Walton
4dadebd9c4
Improve performance by binning together opaque items instead of sorting them. (#12453)
Today, we sort all entities added to all phases, even the phases that
don't strictly need sorting, such as the opaque and shadow phases. This
results in a performance loss because our `PhaseItem`s are rather large
in memory, so sorting is slow. Additionally, determining the boundaries
of batches is an O(n) process.

This commit instead makes Bevy place phase items into *bins*
keyed by *bin keys*, which have the invariant that everything in the
same bin is potentially batchable. This makes determining batch
boundaries O(1), because everything in the same bin can be batched.
Instead of sorting each entity, we now sort only the bin keys. This
drops the sorting time to near-zero on workloads with few bins like
`many_cubes --no-frustum-culling`. Memory usage is improved too, with
batch boundaries and dynamic indices now implicit instead of explicit.
The improved memory usage results in a significant win even on
unbatchable workloads like `many_cubes --no-frustum-culling
--vary-material-data-per-instance`, presumably due to cache effects.

Not all phases can be binned; some, such as transparent and transmissive
phases, must still be sorted. To handle this, this commit splits
`PhaseItem` into `BinnedPhaseItem` and `SortedPhaseItem`. Most of the
logic that today deals with `PhaseItem`s has been moved to
`SortedPhaseItem`. `BinnedPhaseItem` has the new logic.

Frame time results (in ms/frame) are as follows:

| Benchmark                | `binning` | `main`  | Speedup |
| ------------------------ | --------- | ------- | ------- |
| `many_cubes -nfc -vpi` | 232.179     | 312.123   | 34.43%  |
| `many_cubes -nfc`        | 25.874 | 30.117 | 16.40%  |
| `many_foxes`             | 3.276 | 3.515 | 7.30%   |

(`-nfc` is short for `--no-frustum-culling`; `-vpi` is short for
`--vary-per-instance`.)

---

## Changelog

### Changed

* Render phases have been split into binned and sorted phases. Binned
phases, such as the common opaque phase, achieve improved CPU
performance by avoiding the sorting step.

## Migration Guide

- `PhaseItem` has been split into `BinnedPhaseItem` and
`SortedPhaseItem`. If your code has custom `PhaseItem`s, you will need
to migrate them to one of these two types. `SortedPhaseItem` requires
the fewest code changes, but you may want to pick `BinnedPhaseItem` if
your phase doesn't require sorting, as that enables higher performance.

## Tracy graphs

`many-cubes --no-frustum-culling`, `main` branch:
<img width="1064" alt="Screenshot 2024-03-12 180037"
src="https://github.com/bevyengine/bevy/assets/157897/e1180ce8-8e89-46d2-85e3-f59f72109a55">

`many-cubes --no-frustum-culling`, this branch:
<img width="1064" alt="Screenshot 2024-03-12 180011"
src="https://github.com/bevyengine/bevy/assets/157897/0899f036-6075-44c5-a972-44d95895f46c">

You can see that `batch_and_prepare_binned_render_phase` is a much
smaller fraction of the time. Zooming in on that function, with yellow
being this branch and red being `main`, we see:

<img width="1064" alt="Screenshot 2024-03-12 175832"
src="https://github.com/bevyengine/bevy/assets/157897/0dfc8d3f-49f4-496e-8825-a66e64d356d0">

The binning happens in `queue_material_meshes`. Again with yellow being
this branch and red being `main`:
<img width="1064" alt="Screenshot 2024-03-12 175755"
src="https://github.com/bevyengine/bevy/assets/157897/b9b20dc1-11c8-400c-a6cc-1c2e09c1bb96">

We can see that there is a small regression in `queue_material_meshes`
performance, but it's not nearly enough to outweigh the large gains in
`batch_and_prepare_binned_render_phase`.

---------

Co-authored-by: James Liu <contact@jamessliu.com>
2024-03-30 02:55:02 +00:00
Jacques Schutte
4508077297
Move FloatOrd into bevy_math (#12732)
# Objective

- Fixes #12712

## Solution

- Move the `float_ord.rs` file to `bevy_math`
- Change any `bevy_utils::FloatOrd` statements to `bevy_math::FloatOrd`

---

## Changelog

- Moved `FloatOrd` from `bevy_utils` to `bevy_math`

## Migration Guide

- References to `bevy_utils::FloatOrd` should be changed to
`bevy_math::FloatOrd`
2024-03-27 18:30:11 +00:00
James Liu
56bcbb0975
Forbid unsafe in most crates in the engine (#12684)
# Objective
Resolves #3824. `unsafe` code should be the exception, not the norm in
Rust. It's obviously needed for various use cases as it's interfacing
with platforms and essentially running the borrow checker at runtime in
the ECS, but the touted benefits of Bevy is that we are able to heavily
leverage Rust's safety, and we should be holding ourselves accountable
to that by minimizing our unsafe footprint.

## Solution
Deny `unsafe_code` workspace wide. Add explicit exceptions for the
following crates, and forbid it in almost all of the others.

* bevy_ecs - Obvious given how much unsafe is needed to achieve
performant results
* bevy_ptr - Works with raw pointers, even more low level than bevy_ecs.
* bevy_render - due to needing to integrate with wgpu
* bevy_window - due to needing to integrate with raw_window_handle
* bevy_utils - Several unsafe utilities used by bevy_ecs. Ideally moved
into bevy_ecs instead of made publicly usable.
* bevy_reflect - Required for the unsafe type casting it's doing.
* bevy_transform - for the parallel transform propagation
* bevy_gizmos - For the SystemParam impls it has.
* bevy_assets - To support reflection. Might not be required, not 100%
sure yet.
* bevy_mikktspace - due to being a conversion from a C library. Pending
safe rewrite.
* bevy_dynamic_plugin - Inherently unsafe due to the dynamic loading
nature.

Several uses of unsafe were rewritten, as they did not need to be using
them:

* bevy_text - a case of `Option::unwrap_unchecked` could be rewritten
as a normal for loop and match instead of an iterator.
* bevy_color - the Pod/Zeroable implementations were replaceable with
bytemuck's derive macros.
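
For illustration, a minimal sketch of the crate-root attribute pattern
this sets up (not the exact Bevy source):

```rust
// lib.rs of a crate with no exceptions: `forbid` is stronger than `deny`
// in that it cannot be re-allowed further down the module tree.
#![forbid(unsafe_code)]

pub fn sum(values: &[u32]) -> u32 {
    values.iter().sum()
}

// A crate on the exception list (e.g. one doing FFI or pointer tricks)
// would instead opt back in at its root, overriding a workspace-level
// `deny`:
// #![allow(unsafe_code)]
```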
2024-03-27 03:30:08 +00:00
JMS55
4f20faaa43
Meshlet rendering (initial feature) (#10164)
# Objective
- Implements a more efficient, GPU-driven
(https://github.com/bevyengine/bevy/issues/1342) rendering pipeline
based on meshlets.
- Meshes are split into small clusters of triangles called meshlets,
each of which acts as a mini index buffer into the larger mesh data.
Meshlets can be compressed, streamed, culled, and batched much more
efficiently than monolithic meshes.
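
A purely illustrative CPU-side sketch of that layout (the real
`MeshletMesh` asset stores packed, GPU-friendly buffers rather than
`Vec`s):

```rust
struct Meshlet {
    /// Indices into the parent mesh's shared vertex data used by this
    /// cluster.
    vertices: Vec<u32>,
    /// Triangles expressed as indices into `vertices`; clusters are small
    /// enough that `u8` indices suffice.
    triangles: Vec<[u8; 3]>,
    /// Bounding sphere (center, radius) for per-cluster frustum and
    /// occlusion culling.
    bounding_sphere: ([f32; 3], f32),
}

struct MeshletMeshSketch {
    /// Vertex positions shared by all clusters.
    vertex_positions: Vec<[f32; 3]>,
    /// Small clusters that can be culled, streamed, and batched
    /// independently.
    meshlets: Vec<Meshlet>,
}
```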


![image](https://github.com/bevyengine/bevy/assets/47158642/cb2aaad0-7a9a-4e14-93b0-15d4e895b26a)

![image](https://github.com/bevyengine/bevy/assets/47158642/7534035b-1eb7-4278-9b99-5322e4401715)

# Misc
* Future work: https://github.com/bevyengine/bevy/issues/11518
* Nanite reference:
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* Two pass occlusion culling explained very well:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
2024-03-25 19:08:27 +00:00
James Liu
f096ad4155
Set the logo and favicon for all of Bevy's published crates (#12696)
# Objective
Currently the built docs only show the logo and favicon for the top
level `bevy` crate. This makes views like
https://docs.rs/bevy_ecs/latest/bevy_ecs/ look potentially unrelated to
the project at first glance.

## Solution
Reproduce the docs attributes for every crate that Bevy publishes.

Ideally this would be done with some workspace level Cargo.toml control,
but AFAICT, such support does not exist.
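
Roughly, each crate root gains doc attributes along these lines (the URL
here is a placeholder, not necessarily the exact asset Bevy points at):

```rust
#![doc(
    html_logo_url = "https://bevyengine.org/assets/icon.png",
    html_favicon_url = "https://bevyengine.org/assets/icon.png"
)]
```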
2024-03-25 18:52:50 +00:00
Ame
72c51cdab9
Make feature(doc_auto_cfg) work (#12642)
# Objective

- In #12366, `#![cfg_attr(docsrs, feature(doc_auto_cfg))]` was added, but
for it to take effect, `--cfg=docsrs` must be passed via rustdoc-args.


## Solution

- Apply `--cfg=docsrs` to all crates and CI.

I also added `[package.metadata.docs.rs]` to all crates to avoid adding
code behind a feature and forgetting to add the metadata.
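
A minimal sketch of how the pieces fit together, assuming the standard
docs.rs metadata keys:

```rust
// In each crate's Cargo.toml:
//
//     [package.metadata.docs.rs]
//     rustdoc-args = ["--cfg", "docsrs"]
//
// With `--cfg docsrs` passed to rustdoc, the crate-root attribute below
// actually fires on docs.rs and feature-gated items get their cfg badges.
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
```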

Before:

![Screenshot 2024-03-22 at 00 51
57](https://github.com/bevyengine/bevy/assets/104745335/6a9dfdaa-8710-4784-852b-5f9b74e3522c)

After:
![Screenshot 2024-03-22 at 00 51
32](https://github.com/bevyengine/bevy/assets/104745335/c5bd6d8e-8ddb-45b3-b844-5ecf9f88961c)
2024-03-23 02:22:52 +00:00
Stepan Koltsov
2c953914bc
Explain Camera2dBundle.projection needs to be set carefully (#11115)
Encountered it while implementing
https://github.com/bevyengine/bevy/pull/11109.

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-03-18 17:35:33 +00:00
LeshaInc
737b719dda
Add pipeline statistics (#9135)
# Objective

It's useful to have access to render pipeline statistics, since they
provide more information than FPS alone. For example, the number of
drawn triangles can be used to debug culling and LODs. The number of
fragment shader invocations can provide a more stable alternative to GPU
elapsed time.

See also: Render node GPU timing overlay #8067, which doesn't provide
pipeline statistics, but adds a nice overlay.

## Solution

Add `RenderDiagnosticsPlugin`, which enables collecting pipeline
statistics and CPU & GPU timings.
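
A minimal sketch of turning this on in an app, assuming the plugin lives
at `bevy::render::diagnostic::RenderDiagnosticsPlugin` and is added like
any other plugin (reading the recorded values back out, e.g. by logging
diagnostics, is omitted):

```rust
use bevy::prelude::*;
use bevy::render::diagnostic::RenderDiagnosticsPlugin;

fn main() {
    App::new()
        // Collects pipeline statistics plus CPU & GPU timings for render
        // passes once the plugin is registered.
        .add_plugins((DefaultPlugins, RenderDiagnosticsPlugin::default()))
        .run();
}
```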

---

## Changelog

- Add `RenderDiagnosticsPlugin`
- Add `RenderContext::diagnostic_recorder` method

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-03-17 20:29:35 +00:00
Al M
52e3f2007b
Add "all-features = true" to docs.rs metadata for most crates (#12366)
# Objective

Fix `TextBundle` (and many other items) missing from sub-crate docs: the
relevant features are enabled by default in the main crate but are
optional in the sub-crate. See:

- https://docs.rs/bevy/0.13.0/bevy/ui/node_bundles/index.html
- https://docs.rs/bevy_ui/0.13.0/bevy_ui/node_bundles/index.html

~~There are probably other instances in other crates that I could track
down, but maybe "all-features = true" should be used by default in all
sub-crates? Not sure.~~ (There were many.) I only noticed this because
rust-analyzer's "open docs" features takes me to the sub-crate, not the
main one.

## Solution

Add "all-features = true" to docs.rs metadata for crates that use
features.

## Changelog

### Changed

- Unified features documented on docs.rs between main crate and
sub-crates
2024-03-08 20:03:09 +00:00
BD103
713d91b721
Improve Bloom 3D lighting (#11981)
# Objective

- With the recent lighting changes, the default configuration in the
`bloom_3d` example makes it less clear what bloom actually does.
- See [this
screenshot](4fdb1455d5 (r1494648414))
for a comparison.
- `bloom_3d` additionally uses a for-loop to spawn the spheres, which
can be turned into a `Commands::spawn_batch` call.
- The text is black, which is difficult to see on the gray background.

## Solution

- Increase the emissive values of the materials.
- Set text to white.
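
An illustrative sketch of both changes together (bright emissive
materials spawned through `spawn_batch`), not the exact `bloom_3d` code;
it assumes a Bevy 0.14-era API where `StandardMaterial::emissive` is a
`LinearRgba`:

```rust
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    let mesh = meshes.add(Sphere { radius: 0.5 });
    // Emissive values well above 1.0 give bloom something visible to pick
    // up under the new lighting defaults.
    let material = materials.add(StandardMaterial {
        emissive: LinearRgba::rgb(13.99, 5.32, 2.0),
        ..default()
    });
    // One spawn_batch call instead of a for-loop of individual spawns.
    commands.spawn_batch((0..5).map(move |i| PbrBundle {
        mesh: mesh.clone(),
        material: material.clone(),
        transform: Transform::from_xyz(i as f32 * 2.0, 0.0, 0.0),
        ..default()
    }));
}
```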

## Showcase

Before:

<img width="1392" alt="before"
src="https://github.com/bevyengine/bevy/assets/59022059/757057ad-ed9f-4eed-b135-8e2032fcdeb5">

After:

<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/59022059/3f9dc7a8-94b2-44b9-8ac3-deef1905221b">

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-03-07 15:20:38 +00:00
James Liu
512b7463a3
Disentangle bevy_utils/bevy_core's reexported dependencies (#12313)
# Objective
Make bevy_utils less of a compilation bottleneck. Tackle #11478.

## Solution
* Take all of the directly reexported dependencies and move them to
where they're actually used.
* Remove the UUID utilities that have gone unused since `TypePath` took
over for `TypeUuid`.
* There was also an extraneous bytemuck dependency on `bevy_core` that
has not been used for a long time (since `encase` became the primary way
to prepare GPU buffers).
* Remove the `all_tuples` macro reexport from bevy_ecs since it's
accessible from `bevy_utils`.

---

## Changelog
Removed: Many of the reexports from bevy_utils (petgraph, uuid, nonmax,
smallvec, and thiserror).
Removed: bevy_core's reexports of bytemuck.

## Migration Guide
bevy_utils' reexports of petgraph, uuid, nonmax, smallvec, and thiserror
have been removed.

bevy_core's reexport of bytemuck's types has been removed.

Add them as dependencies in your own crate instead.
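
A minimal sketch of the migration, using `smallvec` as the example
(assuming your code previously reached it through the bevy_utils
reexport):

```rust
// Before: use bevy_utils::smallvec::SmallVec;
// Now: add `smallvec = "1"` to your own Cargo.toml and import it directly.
use smallvec::SmallVec;

fn main() {
    let v: SmallVec<[u32; 4]> = SmallVec::from_slice(&[1, 2, 3]);
    assert_eq!(v.len(), 3);
}
```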
2024-03-07 02:30:15 +00:00
James Liu
9e5db9abc7
Clean up type registrations (#12314)
# Objective
Fix #12304. Remove unnecessary type registrations thanks to #4154.

## Solution
Conservatively remove type registrations, keeping the top-level
components, resources, and events but dropping everything else that is
only registered because it is the type of a member of those types.
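
A sketch of the pattern this cleanup relies on: since recursive
registration (#4154), registering a top-level type also registers the
types of its reflected fields, so only the outer type needs an explicit
call (the names below are illustrative):

```rust
use bevy::prelude::*;

#[derive(Reflect, Default)]
struct Inner {
    value: f32,
}

#[derive(Component, Reflect, Default)]
struct MyComponent {
    inner: Inner,
}

fn main() {
    let mut app = App::new();
    // `Inner` no longer needs its own `register_type` call; it is picked
    // up through `MyComponent`'s registration.
    app.register_type::<MyComponent>();
}
```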
2024-03-06 16:05:53 +00:00
James Liu
5619bd09d1
Replace bevy_log's tracing reexport with bevy_utils' (#12254)
# Objective
Fixes #11298. Make the use of bevy_log vs bevy_utils::tracing more
consistent.

## Solution
Replace all uses of bevy_log's logging macros with the reexport from
bevy_utils. Remove bevy_log as a dependency where it's no longer
needed.

Ideally we should just be using tracing directly, but given that all of
these crates are already using bevy_utils, this likely isn't that great
of a loss right now.
2024-03-02 18:38:04 +00:00
Sludge
ecdd284b0e
Register fxaa::Sensitivity and derive Debug (#12167)
# Objective

- More reflection

## Solution

- More. Reflection.

(not sure what bevy's policy on `derive(Debug)` is given that reflection
already lets you accomplish something largely equivalent; maybe
`derive(Reflect)` should generate a `Debug` impl that goes through
`Reflect::debug` unless you opt out?)
2024-02-28 03:22:08 +00:00
Alice Cecile
8ec65525ab
Port bevy_core_pipeline to LinearRgba (#12116)
# Objective

- We should move towards a consistent use of the new `bevy_color` crate.
- As discussed in #12089, splitting this work up into small pieces makes
it easier to review.

## Solution

- Port all uses of `LegacyColor` in the `bevy_core_pipeline` to
`LinearRgba`
- `LinearRgba` is the correct type to use for internal rendering types
- Added `LinearRgba::BLACK` and `WHITE` (used during migration)
- Add `LinearRgba::grey` to more easily construct balanced grey colors
(used during migration)
- Add a conversion from `LinearRgba` to `wgpu::Color`. The converse was
not done at this time, as this is typically a user error.

I did not change the field type of the clear color on the cameras: as
this is user-facing, this should be done in concert with the other
configurable fields.

## Migration Guide

`ColorAttachment` now stores a `LinearRgba` color, rather than a Bevy
0.13 `Color`.
`set_blend_constant` now takes a `LinearRgba` argument, rather than a
Bevy 0.13 `Color`.
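
A quick sketch of the new helpers, assuming bevy_color is reachable as
`bevy::color` and that `grey` fills the red/green/blue channels with the
given value at full alpha:

```rust
use bevy::color::LinearRgba;

fn main() {
    let clear = LinearRgba::BLACK;
    let half = LinearRgba::grey(0.5);
    assert_eq!(clear.alpha, 1.0);
    assert_eq!(half.red, half.green);
}
```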

---------

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
2024-02-26 12:25:11 +00:00