# Objective
- Implement `Meshable` for `Extrusion<T>`
## Solution
- `Meshable` now requires `Meshable::Output: MeshBuilder`. This means
that all `some_primitive.mesh()` calls now return a `MeshBuilder`.
Builders were added for primitives that did not have one prior to this.
- A new trait, `Extrudable: MeshBuilder`, has been added. This trait
allows you to specify the indices of the perimeter of the mesh created
by this `MeshBuilder` and whether they are to be shaded smooth or flat
(see the sketch after this list).
- `Extrusion<P: Primitive2d + Meshable>` is now `Meshable` as well. The
associated `MeshBuilder` is `ExtrusionMeshBuilder`, which is generic over
`P` and uses the `MeshBuilder` of its base shape internally.
- `ExtrusionMeshBuilder` exposes the configuration functions of its
base shape's builder.
- Updated the `3d_shapes` example to include `Extrusion`s
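A rough sketch of the trait's shape (the exact signatures and the fields of `PerimeterSegment` may differ from the PR; this only illustrates the idea):

```rust
use bevy::render::mesh::MeshBuilder; // import path assumed

// Sketch: one run of perimeter indices plus its shading mode,
// as described in the UV notes below.
pub enum PerimeterSegment {
    Smooth { indices: Vec<u32> },
    Flat { indices: Vec<u32> },
}

pub trait Extrudable: MeshBuilder {
    /// The perimeter of the mesh this builder produces, used to
    /// generate the mantle of the extrusion.
    fn perimeter(&self) -> Vec<PerimeterSegment>;
}
```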
## Migration Guide
- Depending on the context, you may need to explicitly call
`.mesh().build()` on primitives where you have previously called
`.mesh()` (see the example below)
- The `Output` type of custom `Meshable` implementations must now
implement `MeshBuilder`.
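For example (a minimal sketch; `Cuboid` stands in for any primitive whose `mesh()` previously produced a `Mesh` directly):

```rust
use bevy::prelude::*;
use bevy::render::mesh::MeshBuilder; // trait import; path assumed

fn setup(mut meshes: ResMut<Assets<Mesh>>) {
    // Before: `.mesh()` could yield a `Mesh` directly.
    // let handle = meshes.add(Cuboid::new(1.0, 1.0, 1.0).mesh());
    // After: `.mesh()` returns a `MeshBuilder`, so build it explicitly.
    let _handle = meshes.add(Cuboid::new(1.0, 1.0, 1.0).mesh().build());
}
```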
## Additional information
- The extrusion's UVs are laid out so that
  - the front face (`+Z`) is in the area between `(0, 0)` and `(0.5, 0.5)`,
  - the back face (`-Z`) is in the area between `(0.5, 0)` and `(1, 0.5)`, and
  - the mantle is in the area between `(0, 0.5)` and `(1, 1)`. Each
`PerimeterSegment` you specified in the `Extrudable` implementation is
allocated an equal portion of this area.
- The UVs of the base shape are scaled to fit the front/back areas, so
these areas are filled using whatever method the base shape uses to fill
the full UV space.
Here is an example of what that looks like on a capsule:
https://github.com/bevyengine/bevy/assets/62256001/425ad288-fbbc-4634-9d3f-5e846cdce85f
This is the texture used:
![extrusion
uvs](https://github.com/bevyengine/bevy/assets/62256001/4e54e421-bfda-44b9-8571-412525cebddf)
The `3d_shapes` example now looks like this:
![image_2024-05-22_235915753](https://github.com/bevyengine/bevy/assets/62256001/3d8bc86d-9ed1-47f2-899a-27aac0a265dd)
---------
Co-authored-by: Lynn Büttgenbach <62256001+solis-lumine-vorago@users.noreply.github.com>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
Co-authored-by: Matty <2975848+mweatherley@users.noreply.github.com>
This commit implements a large subset of [*subpixel morphological
antialiasing*], better known as SMAA. SMAA is a 2011 antialiasing
technique that detects jaggies in an aliased image and smooths them out.
Despite its age, it's been a continual staple of games for over a
decade. Four quality presets are available: *low*, *medium*, *high*, and
*ultra*. I set the default to *high*, on account of modern GPUs being
significantly faster than they were in 2011.
Like the already-implemented FXAA, SMAA works on an unaliased image.
Unlike FXAA, it requires three passes: (1) edge detection; (2) blending
weight calculation; (3) neighborhood blending. Each of the first two
passes writes an intermediate texture for use by the next pass. The
first pass also writes to a stencil buffer in order to dramatically
reduce the number of pixels that the second pass has to examine. Also
unlike FXAA, two built-in lookup textures are required; I bundle them
into the library in compressed KTX2 format.
The [reference implementation of SMAA] is in HLSL, with abundant use of
preprocessor macros to achieve GLSL compatibility. Unfortunately, the
reference implementation predates WGSL by over a decade, so I had to
translate the HLSL to WGSL manually. As much as was reasonably possible
without sacrificing readability, I tried to translate line by line,
preserving comments, both to aid reviewing and to allow patches to the
HLSL to more easily apply to the WGSL. Most of SMAA's features are
supported, but in the interests of making this patch somewhat less huge,
I skipped a few of the more exotic ones:
* The temporal variant is currently unsupported. This is and has been
used in shipping games, so supporting temporal SMAA would be useful
follow-up work. It would, however, require some significant work on TAA
to ensure compatibility, so I opted to skip it in this patch.
* Depth- and chroma-based edge detection are unimplemented; only luma
is. Depth is lower-quality, but faster; chroma is higher-quality, but
slower. Luma is the suggested default edge detection algorithm. (Note
that depth-based edge detection wouldn't work on WebGL 2 anyway, because
of the Naga bug whereby depth sampling is miscompiled in GLSL. This is
the same bug that prevents depth of field from working on that
platform.)
* Predicated thresholding is currently unsupported.
* My implementation is incompatible with SSAA and MSAA, unlike the
original; MSAA must be turned off to use SMAA in Bevy. I believe this
feature was rarely used in practice.
The `anti_aliasing` example has been updated to allow experimentation
with and testing of the different SMAA quality presets. Along the way, I
refactored the example's help text rendering code a bit to eliminate
code repetition.
SMAA is fully supported on WebGL 2.
Fixes #9819.
[*subpixel morphological antialiasing*]: https://www.iryoku.com/smaa/
[reference implementation of SMAA]: https://github.com/iryoku/smaa
## Changelog
### Added
* Subpixel morphological antialiasing, or SMAA, is now available. To use
it, add the `SmaaSettings` component to your `Camera`.
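For example (a minimal sketch; the module path and `preset` field are assumptions based on this description):

```rust
use bevy::core_pipeline::smaa::{SmaaPreset, SmaaSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // *High* is the default preset; set explicitly here for clarity.
        SmaaSettings {
            preset: SmaaPreset::High,
        },
    ));
}
```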
![Screenshot 2024-05-18
134311](https://github.com/bevyengine/bevy/assets/157897/ffbd611c-1b32-4491-b2e2-e410688852ee)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.
[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.
Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
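As a rough sketch of the two fixed parameters (assuming `anisotropy_rotation` is specified in radians):

```rust
use bevy::prelude::*;

fn setup(mut materials: ResMut<Assets<StandardMaterial>>) {
    let _brushed_metal = materials.add(StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        // 0.0 = isotropic, 1.0 = maximally stretched specular highlight.
        anisotropy_strength: 0.8,
        // Rotates the anisotropy direction away from the mesh tangent.
        anisotropy_rotation: 0.0,
        ..default()
    });
}
```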
The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.
A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.
[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/
## Changelog
### Added
* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.
## Screenshots
With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)
Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
# Objective
- Fixes #10909
- Fixes #8492
## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.
---
## Changelog
- Renamed matrices across the engine to follow the `x_from_y` naming,
making the space conversion more obvious.
## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
# Objective
- Plane subdivision was removed without providing an alternative
## Solution
- Add subdivision to the PlaneMeshBuilder
---
## Migration Guide
If you were using `Plane` `subdivisions`, you now need to use
`Plane3d::default().mesh().subdivisions(10)`.
Fixes https://github.com/bevyengine/bevy/issues/13258
# Objective
- Fixes #4823
## Solution
As outlined in the discussion in the linked issue, this is the best
current solution; this PR adds specific `GltfExtras` for
- scenes
- meshes
- materials
- As it is, this is not a breaking change. I hesitated to rename the
current `GltfExtras` component to `PrimitiveGltfExtras`, but that would
result in a breaking change and might be a bit confusing as to what
"primitive" refers to.
## Testing
- I included a bare-bones example & asset (an exported glTF file from
Blender) with glTF extras at all the relevant levels: scene, mesh, and
material
---
## Changelog
- adds "SceneGltfExtras" injected at the scene level if any
- adds "MeshGltfExtras", injected at the mesh level if any
- adds "MaterialGltfExtras", injected at the mesh level if any: ie if a
mesh has a material that has gltf extras, the component will be injected
there.
# Objective
- Follow-up to #13548
- It added a list of all possible labels to the documentation. This
seems hard to keep up to date and doesn't stop people from making
spelling mistakes
## Solution
- Add an enum that can create all the labels possible, and encourage its
use rather than manually typed labels
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
- The default font size is too small to be useful in examples or for
debug text.
- Fixes #13587
## Solution
- Updated the default font size value in `TextStyle` from 12px to 24px.
- Reverted examples to the default `Text` style so that most of them use
the default font size.
## Testing
- WIP
---
## Migration Guide
- The default font size has been increased from 12px to 24px. Make sure
you set the font size to the appropriate value in places where you were
relying on the `Default` text style (see the example below).
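For example, to keep the old look (a minimal sketch):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(TextBundle::from_section(
        "debug text",
        TextStyle {
            // Explicitly restore the previous 12px default;
            // omitting this now yields 24px.
            font_size: 12.0,
            ..default()
        },
    ));
}
```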
# Objective
- In #13542 I broke the `color_grading` example: the UI was not updated
to reflect changed camera settings
- Fixes #13590.
## Solution
- Update the UI when changing color grading
# Objective
Use the "standard" text size / placement for the new text in these
examples.
Continuation of an effort started here:
https://github.com/bevyengine/bevy/pull/8478
This is definitely not comprehensive. I did the ones that were easy to
find and relatively straightforward to update. I meant to just do
`3d_shapes` and `2d_shapes`, but one thing led to another.
## Solution
Use `font_size: 20.0`, the default (built-in) font, `Color::WHITE`
(default), and `Val::Px(12.)` from the edges of the screen.
There are a few little drive-by cleanups of defaults not being used,
etc.
## Testing
Ran the changed examples, verified that they still look reasonable.
# Objective
- In particularly dark scenes, auto-exposure would lead to an unexpected
darkening of the view.
- Fixes #13446.
## Solution
When there are no samples, the average luminance should default to
something other than 0.0. We set it to `settings.min_log_lum`.
## Testing
I was able to reproduce the problem on the `auto_exposure` example by
setting the point light intensity to 2000 and looking into the
right-hand corner. There was a sudden darkening.
Now, the discontinuity is gone.
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
# Objective
- Fixes #13521
## Solution
Set `ambient_intensity` to 0.0 in volumetric_fog example.
I chose setting it explicitly over changing the default in order to make
it clear that this needs to be set depending on whether you have an
`EnvironmentMapLight`. See documentation for `ambient_intensity` and
related members.
## Testing
- Run the volumetric_fog example and notice how the light shown in
#13521 is not there anymore, as expected.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Small improvements on the `color_grading` example
## Solution
- Simplify button creation by creating the buttons in the default
state; the selected one is then marked as selected automatically
- Don't update the UI if not needed
- Also invert the border of the selected button
- Simplify text update
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
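For example (a minimal sketch; module paths are assumptions):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Contains `ScreenSpaceReflectionsSettings` with sensible defaults.
        ScreenSpaceReflectionsBundle::default(),
        // Both prepasses must be present for the reflections to show up.
        DepthPrepass,
        DeferredPrepass,
    ));
}
```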
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera.
Deferred rendering must be enabled for the reflections to appear.
![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)
![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
# Objective
The `ConicalFrustum` primitive should support meshing.
## Solution
Implement meshing for the `ConicalFrustum` primitive. The implementation
is nearly identical to `Cylinder` meshing, but supports two radii.
The default conical frustum is equivalent to a cone with a height of 1
and a radius of 0.5, truncated at half-height.
![kuva](https://github.com/bevyengine/bevy/assets/57632562/b4cab136-ff55-4056-b818-1218e4f38845)
# Objective
Allow the `Tetrahedron` primitive to be used for mesh generation. This
is part of ongoing work to unify the capabilities of `bevy_math`
primitives.
## Solution
`Tetrahedron` implements `Meshable`. Essentially, each face is just
meshed as a `Triangle3d`, but first there is an inversion step when the
signed volume of the tetrahedron is negative to ensure that the faces
all actually point outward.
## Testing
I loaded up some examples and hackily exchanged existing meshes with the
new one to see that it works as expected.
# Objective
- In the `render_to_texture` example, #13317 changed the comment on the
existing light to say that lights don't work on multiple layers, then
added a light on multiple layers explaining that it will work. It's
confusing
## Solution
- Keep the original light, with the updated comment
## Testing
- Run example `render_to_texture`, lighting is correct
# Objective
`parallax_mapping` and `deferred_rendering` both use a roundabout way of
manually overriding the srgbness of their normal map textures.
This can now be done with `load_with_settings` in one line of code.
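Roughly like this (a sketch; the asset path is hypothetical):

```rust
use bevy::prelude::*;
use bevy::render::texture::ImageLoaderSettings;

fn setup(asset_server: Res<AssetServer>) {
    // Load a normal map as linear (non-sRGB) data in one call,
    // instead of patching the image in a separate override system.
    let _normal_map: Handle<Image> = asset_server.load_with_settings(
        "textures/cube_normal.png", // hypothetical path
        |settings: &mut ImageLoaderSettings| settings.is_srgb = false,
    );
}
```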
## Solution
- Delete the override systems and use `load_with_settings` instead
- Make `deferred_rendering`'s instruction text style consistent with
other examples while I'm in there.
(see #8478)
## Testing
Tested by running with `load` instead of `load_with_settings` and
confirming that lighting looks bad when `is_srgb` is not configured, and
good when it is.
## Discussion
It would arguably make more sense to configure this in a `.meta` file,
but I used `load_with_settings` because that's how it was done in the
`clearcoat` example and it does seem nice for documentation purposes to
call this out explicitly in code.
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.
To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
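For example (a minimal sketch; module paths are assumptions):

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Fog is a per-camera effect.
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    // Only shadow-mapped directional lights marked `VolumetricLight`
    // interact with the fog.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```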
The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.
Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:
1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel
shapes.
7. *Horizon: Zero Dawn*-style clouds.
These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.
A new example, `volumetric_fog`, has been added to demonstrate the
effect.
## Changelog
### Added
* A new component, `VolumetricFogSettings`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.
![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)
![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)
[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
# Objective
Remove the limit of `RenderLayer` by using a growable mask using
`SmallVec`.
Changes adopted from @UkoeHB's initial PR here
https://github.com/bevyengine/bevy/pull/12502, which contained additional
changes related to propagating render layers.
## Solution
The main thing needed to unblock this is removing `RenderLayers` from
our shader code. This primarily affects `DirectionalLight`. We are now
computing a `skip` field on the CPU that is then used to skip the light
in the shader.
## Testing
Checked a variety of examples and did a quick benchmark on `many_cubes`.
There were some existing problems identified during the development of
the original pr (see:
https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347).
This PR shouldn't change any existing behavior besides removing the
layer limit (sans the comment in migration about `all` layers no longer
being possible).
---
## Changelog
Removed the limit on `RenderLayers` by using a growable bitset that only
allocates when layers greater than 64 are used.
## Migration Guide
- `RenderLayers::all()` no longer exists. Entities expecting to be
visible on all layers, e.g. lights, should compute the active layers
that are in use.
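For example (a minimal sketch; exact method names are assumptions based on the existing `RenderLayers` API):

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn setup(mut commands: Commands) {
    // Layers beyond 64 now work; the mask only allocates when they are used.
    commands.spawn((SpatialBundle::default(), RenderLayers::layer(0).with(100)));
    // `RenderLayers::all()` is gone: lights that should affect every view
    // must now list the layers actually in use.
    let _light_layers = RenderLayers::from_layers(&[0, 1, 100]);
}
```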
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.
The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur, plus two diagonal blurs. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.
The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number]; we compute the focal
length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.
A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.
Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)
Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)
No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)
[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field
[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04
[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[f-number]: https://en.wikipedia.org/wiki/F-number
[Super 35]: https://en.wikipedia.org/wiki/Super_35
[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/
## Changelog
### Added
* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.
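For example (a minimal sketch; the module path and field names are assumptions based on this description):

```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            // Hexagonal bokeh; `DepthOfFieldMode::Gaussian` is the faster option.
            mode: DepthOfFieldMode::Bokeh,
            // Distance from the camera at which objects are in focus.
            focal_distance: 5.0,
            ..default()
        },
    ));
}
```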
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
# Objective
The `Cone` primitive should support meshing.
## Solution
Implement meshing for the `Cone` primitive. The default cone has a
height of 1 and a base radius of 0.5, and is centered at the origin.
An issue with cone meshes is that the tip does not really have a normal
that works, even with duplicated vertices. This PR uses only a single
vertex for the tip, with a normal of zero; this results in an "invalid"
normal that gets ignored by the fragment shader. This seems to be the
only approach we have for perfectly smooth cones. For discussion on the
topic, see #10298 and #5891.
Another thing to note is that the cone uses polar coordinates for the
UVs:
<img
src="https://github.com/bevyengine/bevy/assets/57632562/e101ded9-110a-4ac4-a98d-f1e4d740a24a"
alt="cone" width="400" />
This way, textures are applied as if looking at the cone from above:
<img
src="https://github.com/bevyengine/bevy/assets/57632562/8dea00f1-a283-4bc4-9676-91e8d4adb07a"
alt="texture" width="200" />
<img
src="https://github.com/bevyengine/bevy/assets/57632562/d9d1b5e6-a8ba-4690-b599-904dd85777a1"
alt="cone" width="200" />
# Objective
Fixes #13097 and other issues preventing the motion blur example from
working on wasm
## Solution
- Use a vec2 for padding
- Fix error initializing the `MotionBlur` struct on wasm+webgl2
- Disable MSAA on wasm+webgl2
- Fix `GlobalsUniform` padding getting added on the shader side for
webgpu builds
## Notes
The motion blur example now runs, but with artifacts. In addition to the
obvious black artifacts, the motion blur or dithering seem to just look
worse in a way I can't really describe. That may be expected.
```
AdapterInfo { name: "ANGLE (Apple, ANGLE Metal Renderer: Apple M1 Max, Unspecified Version)", vendor: 4203, device: 0, device_type: IntegratedGpu, driver: "", driver_info: "", backend: Gl }
```
<img width="1276" alt="Screenshot 2024-04-25 at 6 51 21 AM"
src="https://github.com/bevyengine/bevy/assets/200550/65401d4f-92fe-454b-9dbc-a2d89d3ad963">
Switched the return type from `Vec3` to `Dir3` for directional axis
methods within the `GlobalTransform` component.
## Migration Guide
The `GlobalTransform` component's directional axis methods (e.g.,
`right()`, `left()`, `up()`, `down()`, `back()`, `forward()`) have been
updated from returning `Vec3` to `Dir3`.
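In practice (a minimal sketch):

```rust
use bevy::prelude::*;

fn read_axes(query: Query<&GlobalTransform>) {
    for transform in &query {
        // `forward()` now returns `Dir3`, a guaranteed-normalized direction.
        let forward: Dir3 = transform.forward();
        // `Dir3` derefs to `Vec3`, so most vector math keeps working;
        // dereference explicitly where a real `Vec3` is required.
        let as_vec: Vec3 = *forward;
        let _ = as_vec;
    }
}
```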
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.
The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.
Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_feature_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.
A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.
[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel
[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md
[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html
![Screenshot 2024-04-19
101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)
![Screenshot 2024-04-19
102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)
## Changelog
### Added
* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.
## Migration Guide
* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.
* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
# Objective
- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
- Metering masks
- Compensation curves
- Smooth exposure transitions
This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.
## Solution
An auto exposure compute shader builds a 64 bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it allows giving more weight to certain areas
based on a mask.
---
## Changelog
- Added: `AutoExposure` plugin that allows adjusting a camera's exposure
based on its scene's luminance.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).
This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.
This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.
The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.
Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.
Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.
A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.
[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map
## Changelog
### Added
* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).
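For example (a minimal sketch; field names are assumptions based on the crossfade description):

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn setup(mut commands: Commands) {
    // Show this entity between ~10 and ~50 units from the camera,
    // crossfading in over 10..15 and out over 45..50.
    commands.spawn((
        SpatialBundle::default(),
        VisibilityRange {
            start_margin: 10.0..15.0,
            end_margin: 45.0..50.0,
        },
    ));
}
```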
## Screenshots
High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)
Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)
Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- #12755 introduced the need to download a file to run an example
- This means the example fails to run in CI without downloading that
file
## Solution
- Add a new metadata field to examples, "setup", that provides setup
instructions
- Replace the URL in the meshlet example with one that can actually be
downloaded
- Make example-showcase execute the setup before running an example
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:
* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].
* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.
* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function].
* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.
A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.
[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space
[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D
[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space
[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV
[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function
## Changelog
### Added
* Many new filmic color grading options have been added to the
`ColorGrading` component.
## Migration Guide
* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.
## Screenshots
![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)
![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.
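For example (a minimal sketch; module paths are assumptions):

```rust
use bevy::prelude::*;
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Puts the view into indirect mode and culls on the GPU...
        GpuCulling,
        // ...and skips the now-redundant CPU frustum pass.
        NoCpuCulling,
    ));
}
```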
Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.
A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.
This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.
## Changelog
### Added
* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
Keeping track of explicit visibility per cluster between frames does not
work with LODs, and leads to worse culling (using the final depth buffer
from the previous frame is more accurate).
Instead, we need to generate a second depth pyramid after the second
raster pass, and then use that in the first culling pass in the next
frame to test if a cluster would have been visible last frame or not.
As part of these changes, the write_index_buffer pass has been folded
into the culling pass for a large performance gain, and to avoid
tracking a lot of extra state that would be needed between passes.
Prepass previous model/view stuff was adapted to work with meshlets as
well.
Also fixed a bug with materials, and other misc improvements.
---------
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0
# Objective
- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710
## Solution
- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation combines
knowledge from multiple papers and articles. The approach itself and the
shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.
## Future Improvements
- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects which results in
hard edges. There are solutions to this by either dilating the object or
the motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar, however after
playing with the parameters, it gives quite nice results with 4 samples,
and is significantly better than the artifacts generated when not
jittering.
---
## Changelog
- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity.
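For example (a minimal sketch; the `MotionBlur` field names are assumptions based on the parameters discussed above):

```rust
use bevy::core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        MotionBlurBundle {
            motion_blur: MotionBlur {
                // Fraction of a frame over which the virtual shutter stays open.
                shutter_angle: 1.0,
                // Per-pixel samples; 4 already looks good with jittering.
                samples: 4,
                ..default()
            },
            ..default()
        },
    ));
}
```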
---------
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
Clarify the comment about the camera's coordinate system in
`examples/3d/generate_custom_mesh.rs` by explicitly stating which axes
point where.
Fixes #13018
## Solution
Copy the wording from #13012 into the example.
# Objective
- The docs say that `WireframeColor` is supposed to override the default
global color, but it doesn't.
## Solution
- Use `WireframeColor` to override the global color, as the docs say it
should.
- Updated the example to document this feature
- I also took the opportunity to clean up the code a bit
Fixes #13032
# Objective
When learning about creating meshes in bevy using this example, I
couldn't tell which coordinate system bevy uses, which caused confusion
and required looking it up elsewhere.
## Solution
Add a comment that says what coordinate system bevy uses.
[Alpha to coverage] (A2C) replaces alpha blending with a
hardware-specific multisample coverage mask when multisample
antialiasing is in use. It's a simple form of [order-independent
transparency] that relies on MSAA. ["Anti-aliased Alpha Test: The
Esoteric Alpha To Coverage"] is a good summary of the motivation for and
best practices relating to A2C.
This commit implements alpha to coverage support as a new variant for
`AlphaMode`. You can supply `AlphaMode::AlphaToCoverage` as the
`alpha_mode` field in `StandardMaterial` to use it. When in use, the
standard material shader automatically applies the texture filtering
method from ["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"].
Objects with alpha-to-coverage materials are binned in the opaque pass,
as they're fully order-independent.
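For example (a minimal sketch):

```rust
use bevy::prelude::*;

fn setup(mut materials: ResMut<Assets<StandardMaterial>>) {
    // With MSAA on, the alpha channel becomes a coverage mask instead of
    // a blend, and the material is binned with the opaque geometry.
    let _foliage = materials.add(StandardMaterial {
        alpha_mode: AlphaMode::AlphaToCoverage,
        ..default()
    });
}
```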
The `transparency_3d` example has been updated to feature an object with
alpha to coverage. Happily, the example was already using MSAA.
This is part of #2223, as far as I can tell.
[Alpha to coverage]: https://en.wikipedia.org/wiki/Alpha_to_coverage
[order-independent transparency]:
https://en.wikipedia.org/wiki/Order-independent_transparency
["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"]:
https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f
---
## Changelog
### Added
* The `AlphaMode` enum now supports `AlphaToCoverage`, to provide
limited order-independent transparency when multisample antialiasing is
in use.
I ported the two existing PCF techniques to the cubemap domain as best I
could. Generally, the technique is to create a 2D orthonormal basis
using Gram-Schmidt normalization, then apply the technique over that
basis. The results look fine, though the shadow bias often needs
adjusting.
For comparison, Unity uses a 4-tap pattern for PCF on point lights of
(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1). I tried this but
didn't like the look, so I went with the design above, which ports the
2D techniques to the 3D domain. There's surprisingly little material on
point light PCF.
I've gone through every example using point lights and verified that the
shadow maps look fine, adjusting biases as necessary.
Fixes #3628.
---
## Changelog
### Added
* Shadows from point lights now support percentage-closer filtering
(PCF), and as a result look less aliased.
### Changed
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.
## Migration Guide
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.
# Objective
- As @james7132 said [on
Discord](https://discord.com/channels/691052431525675048/692572690833473578/1224626740773523536),
the `close_on_esc` system is forcing `bevy_window` to depend on
`bevy_input`.
- `close_on_esc` is not likely to be used in production, so it arguably
does not have a place in `bevy_window`.
## Solution
- As suggested by @afonsolage, move `close_on_esc` into
`bevy_dev_tools`.
- Add an example to the documentation too.
- Remove `bevy_window`'s dependency on `bevy_input`.
- Add `bevy_reflect`'s `smol_str` feature to `bevy_window` because it
was implicitly depended upon with `bevy_input` before it was removed.
- Remove any usage of `close_on_esc` from the examples.
- `bevy_dev_tools` is not enabled by default. I personally find it
frustrating to run examples with additional features, so I opted to
remove it entirely.
- This is up for discussion if you have an alternate solution.
---
## Changelog
- Moved `bevy_window::close_on_esc` to `bevy_dev_tools::close_on_esc`.
- Removed usage of `bevy_dev_tools::close_on_esc` from all examples.
## Migration Guide
`bevy_window::close_on_esc` has been moved to
`bevy_dev_tools::close_on_esc`. You will first need to enable
`bevy_dev_tools` as a feature in your `Cargo.toml`:
```toml
[dependencies]
bevy = { version = "0.14", features = ["bevy_dev_tools"] }
```
Finally, modify any imports to use `bevy_dev_tools` instead:
```rust
// Old:
// use bevy::window::close_on_esc;
// New:
use bevy::dev_tools::close_on_esc;

App::new()
    .add_systems(Update, close_on_esc)
    // ...
    .run();
```
# Objective
- There are several redundant imports in the tests and examples that are
not caught by CI because additional flags need to be passed.
## Solution
- Run `cargo check --workspace --tests` and `cargo check --workspace
--examples`, then fix all warnings.
- Add `test-check` to CI, which will be run in the check-compiles job.
This should catch future warnings for tests. Examples are already
checked, but I'm not yet sure why they weren't caught.
## Discussion
- Should the `--tests` and `--examples` flags be added to CI, so this is
caught in the future?
- If so, #12818 will need to be merged first. It was also a warning
raised by checking the examples, but I chose to split off into a
separate PR.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Since #12453, `DeterministicRenderingConfig` doesn't do anything
## Solution
- Remove it
---
## Migration Guide
- Removed `DeterministicRenderingConfig`. There shouldn't be any z
fighting anymore in the rendering even without setting
`stable_sort_z_fighting`
# Objective
Fixes issue #12613 - the RNG used in examples is _deterministic_, but
its implementation is not _portable_ across platforms. We want to switch
to using a portable RNG that does not vary across platforms, to ensure
certain examples play out the same way every time.
## Solution
Replace all occurrences of `rand::rngs::StdRng` with
`rand_chacha::ChaCha8Rng`, as recommended in issue #12613
---
## Changelog
- Add `rand_chacha` as a new dependency (controversial?)
- Replace all occurrences of `rand::rngs::StdRng` with
`rand_chacha::ChaCha8Rng`
# Objective
Fixes #12200.
## Solution
I added a `Hue` trait with the `rotate_hue` method to enable hue
rotation. Additionally, I modified the implementation of animations in
the `animated_material` example.
---
## Changelog
- Added a `Hue` trait to `bevy_color/src/color_ops.rs`.
- Added the `Hue` trait implementation to `Hsla`, `Hsva`, `Hwba`,
`Lcha`, and `Oklcha`.
- Updated animated_material sample.
## Migration Guide
Users of `Oklcha` need to change their usage to use the `with_hue`
method instead of the `with_h` method (see the example below).
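For example (a minimal sketch; the import path for the trait is an assumption):

```rust
use bevy::color::{Hsla, Hue, Oklcha}; // `Hue` path assumed

fn main() {
    let red = Hsla::new(0.0, 1.0, 0.5, 1.0);
    // Rotate the hue 120 degrees around the color wheel.
    let green = red.rotate_hue(120.0);
    // Migration: `with_h` is replaced by the trait's `with_hue`.
    let shifted = Oklcha::default().with_hue(180.0);
    let _ = (green, shifted);
}
```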
---------
Co-authored-by: Pablo Reinhardt <126117294+pablo-lua@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
I was wondering why the `lighting` example still looked quite different
lately (specifically, the intensity of the green light on the cube) and
noticed that we had one more color change I didn't catch before.
Prior to the `bevy_color` port, `PINK` was actually "deep pink" from the
css4 spec.
`palettes::css::PINK` is now correctly a lighter pink color defined by
the same spec.
```rust
// Bevy 0.13
pub const PINK: Color = Color::rgb(1.0, 0.08, 0.58);
// Bevy 0.14-dev
pub const PINK: Srgba = Srgba::new(1.0, 0.753, 0.796, 1.0);
pub const DEEP_PINK: Srgba = Srgba::new(1.0, 0.078, 0.576, 1.0);
```
## Solution
Change usages of `css::PINK` to `DEEP_PINK` to restore the examples to
their former colors.
This is an implementation of RFC #51:
https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md
Note that the implementation strategy is different from the one outlined
in that RFC, because two-phase animation has now landed.
# Objective
Bevy needs animation blending. The RFC for this is [RFC 51].
## Solution
This is an implementation of the RFC. Note that the implementation
strategy is different from the one outlined there, because two-phase
animation has now landed.
This is just a draft to get the conversation started. Currently we're
missing a few things:
- [x] A fully-fleshed-out mechanism for transitions
- [x] A serialization format for `AnimationGraph`s
- [x] Examples are broken, other than `animated_fox`
- [x] Documentation
---
## Changelog
### Added
* The `AnimationPlayer` has been reworked to support blending multiple
animations together through an `AnimationGraph`, and as such will no
longer function unless a `Handle<AnimationGraph>` has been added to the
entity containing the player. See [RFC 51] for more details.
* Transition functionality has moved from the `AnimationPlayer` to a new
component, `AnimationTransitions`, which works in tandem with the
`AnimationGraph`.
## Migration Guide
* `AnimationPlayer`s can no longer play animations by themselves and
need to be paired with a `Handle<AnimationGraph>`. Code that was using
`AnimationPlayer` to play animations will need to create an
`AnimationGraph` asset first, add a node for the clip (or clips) you
want to play, and then supply the index of that node to the
`AnimationPlayer`'s `play` method.
* The `AnimationPlayer::play_with_transition()` method has been removed
and replaced with the `AnimationTransitions` component. If you were
previously using `AnimationPlayer::play_with_transition()`, add all
animations that you were playing to the `AnimationGraph`, and create an
`AnimationTransitions` component to manage the blending between them.
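For example (a minimal sketch; exact module paths and the glTF label are assumptions):

```rust
use bevy::animation::graph::AnimationGraph; // path assumed
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut graphs: ResMut<Assets<AnimationGraph>>,
) {
    let clip = asset_server.load("models/Fox.glb#Animation0"); // hypothetical asset
    // Build a one-clip graph; this returns the graph plus the clip's node index.
    let (graph, node_index) = AnimationGraph::from_clip(clip);
    let mut player = AnimationPlayer::default();
    // `play` now takes a node index into the graph instead of a clip handle.
    player.play(node_index).repeat();
    commands.spawn((player, graphs.add(graph)));
}
```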
[RFC 51]:
https://github.com/bevyengine/rfcs/blob/main/rfcs/51-animation-composition.md
---------
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
- With the recent lighting changes, the default configuration in the
`bloom_3d` example makes it less clear what bloom actually does
- See [this
screenshot](4fdb1455d5 (r1494648414))
for a comparison.
- `bloom_3d` additionally uses a for-loop to spawn the spheres, which
can be turned into a `Commands::spawn_batch` call.
- The text is black, which is difficult to see on the gray background.
## Solution
- Increase emissive values of materials.
- Set text to white.
## Showcase
Before:
<img width="1392" alt="before"
src="https://github.com/bevyengine/bevy/assets/59022059/757057ad-ed9f-4eed-b135-8e2032fcdeb5">
After:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/59022059/3f9dc7a8-94b2-44b9-8ac3-deef1905221b">
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Fixes #12226
Prior to the `bevy_color` port, `DARK_GRAY` used to mean "dark grey."
But it is now lighter than `GRAY`, matching the css4 spec.
## Solution
Change usages of `css::DARK_GRAY` to `Color::srgb(0.25, 0.25, 0.25)` to
restore the examples to their former colors.
With one exception: `display_and_visibility`. I think the new color is
an improvement.
## Note
A lot of these examples could use nicer colors. I'm not trying to revamp
everything here.
The css4 palette is truly a horror. See #12176 and #12080 for some
discussion about alternatives.
# Objective
Fixes #12225
Prior to the `bevy_color` port, `GREEN` used to mean "full green." But
it is now a much darker color matching the css1 spec.
## Solution
Change usages of `basic::GREEN` or `css::GREEN` to `LIME` to restore the
examples to their former colors.
This also removes the duplicate definition of `GREEN` from `css` (it
was already re-exported from `basic`).
## Note
A lot of these examples could use nicer colors. I'm not trying to do
that here.
"Dark Grey" will be tackled separately and has its own tracking issue.
# Objective
Fixes https://github.com/bevyengine/bevy/issues/11157.
## Solution
Stop using `BackgroundColor` as a color tint for `UiImage`. Add a
`UiImage::color` field for color tint instead. Allow a UI node to
simultaneously include a solid-color background and an image, with the
image rendered on top of the background (this is already how it works
for e.g. text).
![2024-02-29_1709239666_563x520](https://github.com/bevyengine/bevy/assets/12173779/ec50c9ef-4c7f-4ab8-a457-d086ce5b3425)
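A hedged sketch of the new setup (the asset path is illustrative;
`with_color` is the builder added by this PR):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands
        .spawn(ImageBundle {
            // Color tint now lives on the image itself.
            image: UiImage::new(asset_server.load("branding/icon.png"))
                .with_color(Color::srgb(1.0, 0.8, 0.8)),
            ..default()
        })
        // BackgroundColor is no longer part of ImageBundle; insert it
        // separately to draw a solid color behind the image.
        .insert(BackgroundColor(Color::BLACK));
}
```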
---
## Changelog
- The `BackgroundColor` component now renders a solid-color background
behind `UiImage` instead of tinting its color.
- Removed `BackgroundColor` from `ImageBundle`, `AtlasImageBundle`, and
`ButtonBundle`.
- Added `UiImage::color`.
- Expanded `RenderUiSystem` variants.
- Renamed `bevy_ui::extract_text_uinodes` to `extract_uinodes_text` for
consistency.
## Migration Guide
- `BackgroundColor` no longer tints the color of UI images. Use
`UiImage::color` for that instead.
- For solid color buttons, replace `ButtonBundle { background_color:
my_color.into(), ... }` with `ButtonBundle { image:
UiImage::default().with_color(my_color), ... }`, and update button
interaction systems to use `UiImage::color` instead of `BackgroundColor`
as well.
- `bevy_ui::RenderUiSystem::ExtractNode` has been split into
`ExtractBackgrounds`, `ExtractImages`, `ExtractBorders`, and
`ExtractText`.
- `bevy_ui::extract_uinodes` has been split into
`bevy_ui::extract_uinode_background_colors` and
`bevy_ui::extract_uinode_images`.
- `bevy_ui::extract_text_uinodes` has been renamed to
`extract_uinode_text`.
This is an implementation within `bevy_window::window` that fixes
#12229.
# Objective
Fixes #12229: allow users to retrieve the window's size and physical
size as Vectors without having to manually construct them using
`height()` and `width()` or `physical_height()` and `physical_width()`
## Solution
As suggested in #12229, created two public functions within `window`:
`size() -> Vec2` and `physical_size() -> UVec2` that return the needed
vectors ready-to-go.
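A quick sketch of the new accessors:
```rust
use bevy::prelude::*;

fn log_window_size(windows: Query<&Window>) {
    let window = windows.single();
    let size: Vec2 = window.size();               // logical size
    let physical: UVec2 = window.physical_size(); // physical size in pixels
    info!("logical: {size}, physical: {physical}");
}
```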
### Discussion
My first FOSS PR ever, so bear with me a bit. I'm new to this.
- Should instances of `Vec2::new(window.width(), window.height())` or
`UVec2::new(window.physical_width(), window.physical_height())` within
Bevy examples be replaced with their `size()`/`physical_size()`
counterparts?
- Discussion within #12229 still holds: should these also be added to
WindowResolution?
Although we cached hashes of `MeshVertexBufferLayout`, we were paying
the cost of `PartialEq` on `InnerMeshVertexBufferLayout` for every
entity, every frame. This patch changes that logic to place
`MeshVertexBufferLayout`s in `Arc`s so that they can be compared and
hashed by pointer. This results in a 28% speedup in the
`queue_material_meshes` phase of `many_cubes`, with frustum culling
disabled.
Additionally, this patch contains two minor changes:
1. This commit flattens the specialized mesh pipeline cache to one level
of hash tables instead of two. This saves a hash lookup.
2. The example `many_cubes` has been given a `--no-frustum-culling`
flag, to aid in benchmarking.
See the Tracy profile:
<img width="1064" alt="Screenshot 2024-02-29 144406"
src="https://github.com/bevyengine/bevy/assets/157897/18632f1d-1fdd-4ac7-90ed-2d10306b2a1e">
## Migration Guide
* Duplicate `MeshVertexBufferLayout`s are now combined into a single
object, `MeshVertexBufferLayoutRef`, which contains an
atomically-reference-counted pointer to the layout. Code that was using
`MeshVertexBufferLayout` may need to be updated to use
`MeshVertexBufferLayoutRef` instead.
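For custom materials, the change mostly shows up in `specialize`-style
code; a hedged sketch of the new access pattern (reaching the layout
through the ref's `.0` field reflects my reading of the new wrapper):
```rust
use bevy::render::mesh::{Mesh, MeshVertexBufferLayoutRef};
use bevy::render::render_resource::VertexBufferLayout;

fn position_layout(layout: &MeshVertexBufferLayoutRef) -> VertexBufferLayout {
    // The layout itself now sits behind an Arc; go through `.0` to reach it.
    layout
        .0
        .get_layout(&[Mesh::ATTRIBUTE_POSITION.at_shader_location(0)])
        .expect("mesh is missing a position attribute")
}
```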
# Objective
- As part of the migration process we need to a) see the end effect of
the migration on user ergonomics b) check for serious perf regressions
c) actually migrate the code
- To accomplish this, I'm going to attempt to migrate all of the
remaining user-facing usages of `LegacyColor` in one PR, being careful
to keep a clean commit history.
- Fixes#12056.
## Solution
I've chosen to use the polymorphic `Color` type as our standard
user-facing API.
- [x] Migrate `bevy_gizmos`.
- [x] Take `impl Into<Color>` in all `bevy_gizmos` APIs
- [x] Migrate sprites
- [x] Migrate UI
- [x] Migrate `ColorMaterial`
- [x] Migrate `MaterialMesh2D`
- [x] Migrate fog
- [x] Migrate lights
- [x] Migrate StandardMaterial
- [x] Migrate wireframes
- [x] Migrate clear color
- [x] Migrate text
- [x] Migrate gltf loader
- [x] Register color types for reflection
- [x] Remove `LegacyColor`
- [x] Make sure CI passes
Incidental improvements to ease migration:
- added `Color::srgba_u8`, `Color::srgba_from_array` and friends
- added `set_alpha`, `is_fully_transparent` and `is_fully_opaque` to the
`Alpha` trait
- add and immediately deprecate (lol) `Color::rgb` and friends in favor
of more explicit and consistent `Color::srgb`
- standardized on white and black for most example text colors
- added vector field traits to `LinearRgba`: ~~`Add`, `Sub`,
`AddAssign`, `SubAssign`,~~ `Mul<f32>` and `Div<f32>`. Multiplications
and divisions do not scale alpha. `Add` and `Sub` have been cut from
this PR.
- added `LinearRgba` and `Srgba` `RED/GREEN/BLUE`
- added `LinearRgba::to_f32_array` and `LinearRgba::to_u32`
## Migration Guide
Bevy's color types have changed! Wherever you used a
`bevy::render::Color`, a `bevy::color::Color` is used instead.
These are quite similar! Both are enums storing a color in a specific
color space (or to be more precise, using a specific color model).
However, each of the different color models now has its own type.
TODO...
- `Color::rgba`, `Color::rgb`, `Color::rgba_u8`, `Color::rgb_u8`,
`Color::rgb_from_array` are now `Color::srgba`, `Color::srgb`,
`Color::srgba_u8`, `Color::srgb_u8` and `Color::srgb_from_array`.
- `Color::set_a` and `Color::a` is now `Color::set_alpha` and
`Color::alpha`. These are part of the `Alpha` trait in `bevy_color`.
- `Color::is_fully_transparent` is now part of the `Alpha` trait in
`bevy_color`
- `Color::r`, `Color::set_r`, `Color::with_r` and the equivalents for
`g`, `b` `h`, `s` and `l` have been removed due to causing silent
relatively expensive conversions. Convert your `Color` into the desired
color space, perform your operations there, and then convert it back
into a polymorphic `Color` enum.
- `Color::hex` is now `Srgba::hex`. Call `.into()` or construct a
`Color::Srgba` variant manually to convert it.
- `WireframeMaterial`, `ExtractedUiNode`, `ExtractedDirectionalLight`,
`ExtractedPointLight`, `ExtractedSpotLight` and `ExtractedSprite` now
store a `LinearRgba`, rather than a polymorphic `Color`
- `Color::rgb_linear` and `Color::rgba_linear` are now
`Color::linear_rgb` and `Color::linear_rgba`
- The various CSS color constants are no longer stored directly on
`Color`. Instead, they're defined in the `Srgba` color space, and
accessed via `bevy::color::palettes::css`. Call `.into()` on them to
convert them into a `Color` for quick debugging use, and consider using
the much prettier `tailwind` palette for prototyping (see the sketch
after this list).
- The `LIME_GREEN` color has been renamed to `LIMEGREEN` to comply with
the standard naming.
- Vector field arithmetic operations on `Color` (add, subtract, multiply
and divide by a f32) have been removed. Instead, convert your colors
into `LinearRgba` space, and perform your operations explicitly there.
This is particularly relevant when working with emissive or HDR colors,
whose color channel values are routinely outside of the ordinary 0 to 1
range.
- `Color::as_linear_rgba_f32` has been removed. Call
`LinearRgba::to_f32_array` instead, converting if needed.
- `Color::as_linear_rgba_u32` has been removed. Call
`LinearRgba::to_u32` instead, converting if needed.
- Several other color conversion methods to transform LCH or HSL colors
into float arrays or `Vec` types have been removed. Please reimplement
these externally or open a PR to re-add them if you found them
particularly useful.
- Various methods on `Color` such as `rgb` or `hsl` to convert the color
into a specific color space have been removed. Convert into
`LinearRgba`, then to the color space of your choice.
- Various implicitly-converting color value methods on `Color` such as
`r`, `g`, `b` or `h` have been removed. Please convert it into the color
space of your choice, then check these properties.
- `Color` no longer implements `AsBindGroup`. Store a `LinearRgba`
internally instead to avoid conversion costs.
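As a quick illustration of the palette and arithmetic changes above (a
sketch; the `to_linear` conversion is assumed from `bevy_color`'s API):
```rust
use bevy::color::palettes::css;
use bevy::prelude::*;

fn example() {
    // CSS constants are now `Srgba` values in a palette module:
    let pink: Color = css::DEEP_PINK.into();

    // Arithmetic happens explicitly in linear space (`Mul<f32>` from this PR):
    let emissive: LinearRgba = Color::srgb(1.0, 0.2, 0.2).to_linear() * 5.0;
}
```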
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Afonso Lage <lage.afonso@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
Split up from #12017, rename Bevy's direction types.
Currently, Bevy has the `Direction2d`, `Direction3d`, and `Direction3dA`
types, which provide a type-level guarantee that their contained vectors
remain normalized. They can be very useful for a lot of APIs for safety,
explicitness, and in some cases performance, as they can sometimes avoid
unnecessary normalizations.
However, many consider them to be inconvenient to use, and opt for
standard vector types like `Vec3` because of this. One reason is that
the direction type names are a bit long and can be annoying to write (of
course you can use autocomplete, but just typing `Vec3` is still nicer),
and in some instances, the extra characters can make formatting worse.
The naming is also inconsistent with Glam's shorter type names, and
results in names like `Direction3dA`, which (in my opinion) are
difficult to read and even a bit ugly.
This PR proposes renaming the types to `Dir2`, `Dir3`, and `Dir3A`.
These names are nice and easy to write, consistent with Glam, and work
well for variants like the SIMD aligned `Dir3A`. As a bonus, it can also
result in nicer formatting in a lot of cases, which can be seen from the
diff of this PR.
Some examples of what it looks like: (copied from #12017)
```rust
// Before
let ray_cast = RayCast2d::new(Vec2::ZERO, Direction2d::X, 5.0);
// After
let ray_cast = RayCast2d::new(Vec2::ZERO, Dir2::X, 5.0);
```
```rust
// Before (an example using Bevy XPBD)
let hit = spatial_query.cast_ray(
Vec3::ZERO,
Direction3d::X,
f32::MAX,
true,
SpatialQueryFilter::default(),
);
// After
let hit = spatial_query.cast_ray(
Vec3::ZERO,
Dir3::X,
f32::MAX,
true,
SpatialQueryFilter::default(),
);
```
```rust
// Before
self.circle(
Vec3::new(0.0, -2.0, 0.0),
Direction3d::Y,
5.0,
Color::TURQUOISE,
);
// After (formatting is collapsed in this case)
self.circle(Vec3::new(0.0, -2.0, 0.0), Dir3::Y, 5.0, Color::TURQUOISE);
```
## Solution
Rename `Direction2d`, `Direction3d`, and `Direction3dA` to `Dir2`,
`Dir3`, and `Dir3A`.
---
## Migration Guide
The `Direction2d` and `Direction3d` types have been renamed to `Dir2`
and `Dir3`.
## Additional Context
This has been brought up on the Discord a few times, and we had a small
[poll](https://discord.com/channels/691052431525675048/1203087353850364004/1212465038711984158)
on this. `Dir2`/`Dir3`/`Dir3A` was quite unanimously chosen as the best
option, but of course it was a very small poll and inconclusive, so
other opinions are certainly welcome too.
---------
Co-authored-by: IceSentry <c.giguere42@gmail.com>
# Objective
`FromWorld` is often used to group loading and creation of assets for
resources.
With this setup, users often end up repetitively calling
`.resource::<AssetServer>` and `.resource_mut::<Assets<T>>`, and may
have difficulties handling lifetimes of the returned references.
## Solution
Add extension methods to `World` to add and load assets, through a new
extension trait defined in `bevy_asset`.
### Other considerations
* This might be a bit too "magic", as it makes the resource access
implicit.
* We could also implement `DirectAssetAccessExt` on `App`, but it didn't
feel necessary, as `FromWorld` is the principal use-case here.
---
## Changelog
* Add the `DirectAssetAccessExt` trait, which adds the `add_asset`,
`load_asset`, and `load_asset_with_settings` methods to the `World` type.
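A minimal sketch of the intended `FromWorld` usage (the trait path is
assumed from the changelog; the mesh is just an example):
```rust
use bevy::asset::DirectAssetAccessExt;
use bevy::prelude::*;

#[derive(Resource)]
struct MyResource {
    mesh: Handle<Mesh>,
}

impl FromWorld for MyResource {
    fn from_world(world: &mut World) -> Self {
        // Before: world.resource_mut::<Assets<Mesh>>().add(...)
        Self {
            mesh: world.add_asset(Mesh::from(Cuboid::new(1.0, 1.0, 1.0))),
        }
    }
}
```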
# Objective
Split up from #12017, add an aligned version of `Direction3d` for SIMD,
and move direction types out of `primitives`.
## Solution
Add `Direction3dA` and move direction types into a new `direction`
module.
---
## Migration Guide
The `Direction2d`, `Direction3d`, and `InvalidDirectionError` types have
been moved out of `bevy::math::primitives`.
Before:
```rust
use bevy::math::primitives::Direction3d;
```
After:
```rust
use bevy::math::Direction3d;
```
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- The 3D Lighting example is meant to show using multiple lights in the
same scene.
- It currently looks very dark. (See [this
image](4fdb1455d5 (r1494653511)).)
- Resetting the physical camera properties sets the shutter speed to 1 /
125, even though it initially starts at 1 / 100.
## Solution
- Increase the intensity of all 3 lights in the example.
- Now it is much closer to the example in Bevy 0.12.
- I had to remove the comment explaining the lightbulb equivalent of the
intensities because I don't know how it was calculated. Does anyone know
what light emits 100,000 lumens?
- Set the initial shutter speed to 1 / 125.
## Showcase
Before:
<img width="1392" alt="before"
src="https://github.com/bevyengine/bevy/assets/59022059/ac353e02-58e9-4661-aa6d-e5fdf0dcd2f6">
After:
<img width="1392" alt="after"
src="https://github.com/bevyengine/bevy/assets/59022059/4ff0beb6-0ced-4fb2-a953-04be2c77f437">
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
The migration process for `bevy_color` (#12013) will be fairly involved:
there will be hundreds of affected files, and a large number of APIs.
## Solution
To allow us to proceed granularly, we're going to keep both
`bevy_color::Color` (new) and `bevy_render::Color` (old) around until
the migration is complete.
However, simply doing this directly is confusing! They're both called
`Color`, making it very hard to tell when a portion of the code has been
ported.
As discussed in #12056, by renaming the old `Color` type, we can make it
easier to gradually migrate over, one API at a time.
## Migration Guide
THIS MIGRATION GUIDE INTENTIONALLY LEFT BLANK.
This change should not be shipped to end users: delete this section in
the final migration guide!
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
# Objective
- Example reflection_probe is not frame rate independent
## Solution
- Use the time delta to rotate the camera, using the same rotation
speed as the load_gltf example:
31d7fcd9cb/examples/3d/load_gltf.rs (L63)
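In essence (a sketch; the exact rotation speed in the example may differ):
```rust
use bevy::prelude::*;

fn rotate_camera(time: Res<Time>, mut camera: Query<&mut Transform, With<Camera>>) {
    for mut transform in &mut camera {
        // Radians per second, not radians per frame.
        transform.rotate_y(time.delta_seconds() * 0.3);
    }
}
```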
---
# Objective
Move Gizmo examples into the separate directory
Fixes #11899
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Fixes two small quality issues:
1. With the new default ev100 exposure value, the irradiance intensity
was too low
2. The camera was rotating at a fixed speed (instead of a speed
multiplied by delta time), resulting in frame-rate dependent rotation
speed.
# Objective
Fixes #11846
## Solution
Add a `synchronous_pipeline_compilation` field to `RenderPlugin`,
defaulting to `false`.
Most of the diff is whitespace.
## Changelog
Added `synchronous_pipeline_compilation` to `RenderPlugin` for
disabling async pipeline creation.
## Migration Guide
TODO: consider combining this with the guide for #11846
`RenderPlugin` has a new `synchronous_pipeline_compilation` property.
The default value is `false`. Set this to `true` if you want to retain
the previous synchronous behavior.
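A sketch of opting back in (field name per this PR):
```rust
use bevy::prelude::*;
use bevy::render::RenderPlugin;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            // Restore the previous blocking behavior.
            synchronous_pipeline_compilation: true,
            ..default()
        }))
        .run();
}
```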
---------
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: François <mockersf@gmail.com>
# Objective
After adding configurable exposure, we set the default ev100 value to
`7` (indoor). This brought us out of sync with Blender's configuration
and defaults. This PR changes the default to `9.7` (bright indoor or
very overcast outdoors), as I calibrated in #11577. This feels like a
very reasonable default.
The other changes generally center around tweaking Bevy's lighting
defaults and examples to play nicely with this number, alongside a few
other tweaks and improvements.
Note that for artistic reasons I have reverted some examples, which
changed to directional lights in #11581, back to point lights.
Fixes #11577
---
## Changelog
- Changed `Exposure::ev100` from `7` to `9.7` to better match Blender
- Renamed `ExposureSettings` to `Exposure`
- `Camera3dBundle` now includes `Exposure` for discoverability
- Bumped `FULL_DAYLIGHT` and `DIRECT_SUNLIGHT` to represent the
middle-to-top of those ranges instead of near the bottom
- Added new `AMBIENT_DAYLIGHT` constant and set that as the new
`DirectionalLight` default illuminance.
- `PointLight` and `SpotLight` now have a default `intensity` of
1,000,000 lumens. This makes them actually useful in the context of the
new "semi-outdoor" exposure and puts them in the "cinema lighting"
category instead of the "common household light" category. They are also
reasonably close to the Blender default.
- `AmbientLight` default has been bumped from `20` to `80`.
## Migration Guide
- The increased `Exposure::ev100` means that all existing 3D lighting
will need to be adjusted to match (DirectionalLights, PointLights,
SpotLights, EnvironmentMapLights, etc). Or alternatively, you can adjust
the `Exposure::ev100` on your cameras to work nicely with your current
lighting values. If you are currently relying on default intensity
values, you might need to change the intensity to achieve the same
effect. Note that in Bevy 0.12, point/spot lights had a different hard
coded ev100 value than directional lights. In Bevy 0.13, they use the
same ev100, so if you have both in your scene, the _scale_ between these
light types has changed and you will likely need to adjust one or both
of them.
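For the camera-side route, a hedged sketch (the `exposure` field on
`Camera3dBundle` is per this PR's changelog; the module path is assumed):
```rust
use bevy::prelude::*;
use bevy::render::camera::Exposure;

fn setup(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        // Restore the old indoor default of EV100 = 7.0.
        exposure: Exposure { ev100: 7.0 },
        ..default()
    });
}
```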
# Objective
Fix https://github.com/bevyengine/bevy/issues/11577.
## Solution
Fix the examples, add a few constants to make setting light values
easier, and change the default lighting settings to be more realistic
(now designed for an overcast day instead of an indoor environment).
---
I did not include any example-related changes in here.
## Changelogs (not including breaking changes)
### bevy_pbr
- Added `light_consts` module (included in prelude), which contains
common lux and lumen values for lights.
- Added `AmbientLight::NONE` constant, which is an ambient light with a
brightness of 0.
- Added non-EV100 variants for `ExposureSettings`'s EV100 constants,
which allow easier construction of an `ExposureSettings` from an EV100
constant.
## Breaking changes
### bevy_pbr
The several default lighting values were changed:
- `PointLight`'s default `intensity` is now `2000.0`
- `SpotLight`'s default `intensity` is now `2000.0`
- `DirectionalLight`'s default `illuminance` is now
`light_consts::lux::OVERCAST_DAY` (`1000.`)
- `AmbientLight`'s default `brightness` is now `20.0`
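A sketch of using the new constants (the `light_consts` module is in the
prelude per the changelog above):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            illuminance: light_consts::lux::OVERCAST_DAY,
            ..default()
        },
        ..default()
    });
}
```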
I just implemented this to record a video for the new blog post, but I
figured it would also make a good dedicated example. This also allows us
to remove a lot of code from the 2d/3d gizmo examples since it
supersedes this portion of code.
Depends on: https://github.com/bevyengine/bevy/pull/11699
# Objective
#11431 and #11688 implemented meshing support for Bevy's new geometric
primitives. The next step is to deprecate the shapes in
`bevy_render::mesh::shape` and to later remove them completely for 0.14.
## Solution
Deprecate the shapes and reduce code duplication by utilizing the
primitive meshing API for the old shapes where possible.
Note that some shapes have behavior that can't be exactly reproduced
with the new primitives yet:
- `Box` is more of an AABB with min/max extents
- `Plane` supports a subdivision count
- `Quad` has a `flipped` property
These types have not been changed to utilize the new primitives yet.
---
## Changelog
- Deprecated all shapes in `bevy_render::mesh::shape`
- Changed all examples to use new primitives for meshing
## Migration Guide
Bevy has previously used rendering-specific types like `UVSphere` and
`Quad` for primitive mesh shapes. These have now been deprecated to use
the geometric primitives newly introduced in version 0.13.
Some examples:
```rust
let before = meshes.add(shape::Box::new(5.0, 0.15, 5.0));
let after = meshes.add(Cuboid::new(5.0, 0.15, 5.0));
let before = meshes.add(shape::Quad::default());
let after = meshes.add(Rectangle::default());
let before = meshes.add(shape::Plane::from_size(5.0));
// The surface normal can now also be specified when using `new`
let after = meshes.add(Plane3d::default().mesh().size(5.0, 5.0));
let before = meshes.add(
Mesh::try_from(shape::Icosphere {
radius: 0.5,
subdivisions: 5,
})
.unwrap(),
);
let after = meshes.add(Sphere::new(0.5).mesh().ico(5).unwrap());
```
# Objective
- Fixes#11740
## Solution
- Turned `Mesh::set_indices` into `Mesh::insert_indices` and added
related methods for completeness.
---
## Changelog
- Replaced `Mesh::set_indices(indices: Option<Indices>)` with
`Mesh::insert_indices(indices: Indices)`
- Replaced `Mesh::with_indices(indices: Option<Indices>)` with
`Mesh::with_inserted_indices(indices: Indices)` and
`Mesh::with_removed_indices()` mirroring the API for inserting /
removing attributes.
- Updated the examples and internal uses of the APIs described above.
## Migration Guide
- Use `Mesh::insert_indices` or `Mesh::with_inserted_indices` instead of
`Mesh::set_indices` / `Mesh::with_indices`.
- If you have passed `None` to `Mesh::set_indices` or
`Mesh::with_indices` you should use `Mesh::remove_indices` or
`Mesh::with_removed_indices` instead.
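A sketch of the migration:
```rust
use bevy::prelude::*;
use bevy::render::mesh::Indices;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::PrimitiveTopology;

fn example() {
    let mut mesh = Mesh::new(PrimitiveTopology::TriangleList, RenderAssetUsages::default());
    // Before: mesh.set_indices(Some(Indices::U32(vec![0, 1, 2])));
    mesh.insert_indices(Indices::U32(vec![0, 1, 2]));
    // Before: mesh.set_indices(None);
    mesh.remove_indices();
}
```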
---------
Co-authored-by: François <mockersf@gmail.com>
# Objective
Bevy could benefit from *irradiance volumes*, also known as *voxel
global illumination* or simply as light probes (though this term is not
preferred, as multiple techniques can be called light probes).
Irradiance volumes are a form of baked global illumination; they work by
sampling the light at the centers of each voxel within a cuboid. At
runtime, the voxels surrounding the fragment center are sampled and
interpolated to produce indirect diffuse illumination.
## Solution
This is divided into two sections. The first is copied and pasted from
the irradiance volume module documentation and describes the technique.
The second part consists of notes on the implementation.
### Overview
An *irradiance volume* is a cuboid voxel region consisting of
regularly-spaced precomputed samples of diffuse indirect light. They're
ideal if you have a dynamic object such as a character that can move
about static non-moving geometry such as a level in a game, and you
want that dynamic object to be affected by the light bouncing off that
static geometry.

To use irradiance volumes, you need to precompute, or *bake*, the
indirect light in your scene. Bevy doesn't currently come with a way to
do this. Fortunately, [Blender] provides a [baking tool] as part of the
Eevee renderer, and its irradiance volumes are compatible with those
used by Bevy.

The [`bevy-baked-gi`] project provides a tool, `export-blender-gi`,
that can extract the baked irradiance volumes from the Blender `.blend`
file and package them up into a `.ktx2` texture for use by the engine.
See the documentation in the `bevy-baked-gi` project for more details
as to this workflow.

Like all light probes in Bevy, irradiance volumes are 1×1×1 cubes that
can be arbitrarily scaled, rotated, and positioned in a scene with the
[`bevy_transform::components::Transform`] component. The 3D voxel grid
will be stretched to fill the interior of the cube, and the
illumination from the irradiance volume will apply to all fragments
within that bounding region.

Bevy's irradiance volumes are based on Valve's [*ambient cubes*] as
used in *Half-Life 2* ([Mitchell 2006], slide 27). These encode a
single color of light from the six 3D cardinal directions and blend the
sides together according to the surface normal.

The primary reason for choosing ambient cubes is to match Blender, so
that its Eevee renderer can be used for baking. However, they also have
some advantages over the common second-order spherical harmonics
approach: ambient cubes don't suffer from ringing artifacts, they are
smaller (6 colors for ambient cubes as opposed to 9 for spherical
harmonics), and evaluation is faster. A smaller basis allows for a
denser grid of voxels with the same storage requirements.

If you wish to use a tool other than `export-blender-gi` to produce the
irradiance volumes, you'll need to pack the irradiance volumes in the
following format. The irradiance volume of resolution *(Rx, Ry, Rz)* is
expected to be a 3D texture of dimensions *(Rx, 2Ry, 3Rz)*. The
unnormalized texture coordinate *(s, t, p)* of the voxel at coordinate
*(x, y, z)* with side *S* ∈ *{-X, +X, -Y, +Y, -Z, +Z}* is as follows:

```text
s = x

t = y + ⎰  0 if S ∈ {-X, -Y, -Z}
        ⎱ Ry if S ∈ {+X, +Y, +Z}

        ⎧   0 if S ∈ {-X, +X}
p = z + ⎨  Rz if S ∈ {-Y, +Y}
        ⎩ 2Rz if S ∈ {-Z, +Z}
```

Visually, in a left-handed coordinate system with Y up, viewed from the
right, the 3D texture looks like a stacked series of voxel grids, one
for each cube side, in this order:

| **+X** | **+Y** | **+Z** |
| ------ | ------ | ------ |
| **-X** | **-Y** | **-Z** |

A terminology note: Other engines may refer to irradiance volumes as
*voxel global illumination*, *VXGI*, or simply as *light probes*.
Sometimes *light probe* refers to what Bevy calls a reflection probe.
In Bevy, *light probe* is a generic term that encompasses all cuboid
bounding regions that capture indirect illumination, whether based on
voxels or not.

Note that, if binding arrays aren't supported (e.g. on WebGPU or WebGL
2), then only the closest irradiance volume to the view will be taken
into account during rendering.
[*ambient cubes*]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf
[Mitchell 2006]:
https://advances.realtimerendering.com/s2006/Mitchell-ShadingInValvesSourceEngine.pdf
[Blender]: http://blender.org/
[baking tool]:
https://docs.blender.org/manual/en/latest/render/eevee/render_settings/indirect_lighting.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
### Implementation notes
This patch generalizes light probes so as to reuse as much code as
possible between irradiance volumes and the existing reflection probes.
This approach was chosen because both techniques share numerous
similarities:
1. Both irradiance volumes and reflection probes are cuboid bounding
regions.
2. Both are responsible for providing baked indirect light.
3. Both techniques involve presenting a variable number of textures to
the shader from which indirect light is sampled. (In the current
implementation, this uses binding arrays.)
4. Both irradiance volumes and reflection probes require gathering and
sorting probes by distance on CPU.
5. Both techniques require the GPU to search through a list of bounding
regions.
6. Both will eventually want to have falloff so that we can smoothly
blend as objects enter and exit the probes' influence ranges. (This is
not implemented yet to keep this patch relatively small and reviewable.)
To do this, we generalize most of the methods in the reflection probes
patch #11366 to be generic over a trait, `LightProbeComponent`. This
trait is implemented by both `EnvironmentMapLight` (for reflection
probes) and `IrradianceVolume` (for irradiance volumes). Using a trait
will allow us to add more types of light probes in the future. In
particular, I highly suspect we will want real-time reflection planes
for mirrors in the future, which can be easily slotted into this
framework.
## Changelog
### Added
* A new `IrradianceVolume` asset type is available for baked voxelized
light probes. You can bake the global illumination using Blender or
another tool of your choice and use it in Bevy to apply indirect
illumination to dynamic objects.
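A hedged sketch of spawning one (the asset path is illustrative; the
component and field names reflect my reading of this PR's API):
```rust
use bevy::pbr::irradiance_volume::IrradianceVolume;
use bevy::pbr::LightProbe;
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpatialBundle {
            // The probe's 1×1×1 cube is scaled out to cover the level geometry.
            transform: Transform::from_scale(Vec3::new(10.0, 5.0, 10.0)),
            ..default()
        },
        IrradianceVolume {
            voxels: asset_server.load("irradiance_volumes/Example.vxgi.ktx2"),
            intensity: 150.0,
        },
        LightProbe,
    ));
}
```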
# Objective
Split up from #11007, fixing most of the remaining work for #10569.
Implement `Meshable` for `Cuboid`, `Sphere`, `Cylinder`, `Capsule`,
`Torus`, and `Plane3d`. This covers all shapes that Bevy has mesh
structs for in `bevy_render::mesh::shapes`.
`Cone` and `ConicalFrustum` are new shapes, so I can add them in a
follow-up, or I could just add them here directly if that's preferable.
## Solution
Implement `Meshable` for `Cuboid`, `Sphere`, `Cylinder`, `Capsule`,
`Torus`, and `Plane3d`.
The logic is mostly just a copy of the existing `bevy_render`
shapes, but `Plane3d` has a configurable surface normal that affects the
orientation. Some property names have also been changed to be more
consistent.
The default values differ from the old shapes to make them a bit more
logical:
- Spheres now have a radius of 0.5 instead of 1.0. The default capsule
is equivalent to the default cylinder with the sphere's halves glued on.
- The inner and outer radius of the torus are now 0.5 and 1.0 instead of
0.5 and 1.5 (i.e. the new minor and major radii are 0.25 and 0.75). It's
double the width of the default cuboid, half of its height, and the
default sphere matches the size of the hole.
- `Cuboid` is 1x1x1 by default unlike the dreaded `Box` which is 2x1x1.
Before, with "old" shapes:
![old](https://github.com/bevyengine/bevy/assets/57632562/733f3dda-258c-4491-8152-9829e056a1a3)
Now, with primitive meshing:
![new](https://github.com/bevyengine/bevy/assets/57632562/5a1af14f-bb98-401d-82cf-de8072fea4ec)
I only changed the `3d_shapes` example to use primitives for now. I can
change them all in this PR or a follow-up though, whichever way is
preferable.
### Sphere API
Spheres have had separate `Icosphere` and `UVSphere` structs, but with
primitives we only have one `Sphere`.
We need to handle this with builders:
```rust
// Existing structs
let ico = Mesh::try_from(Icosphere::default()).unwrap();
let uv = Mesh::from(UVSphere::default());
// Primitives
let ico = Sphere::default().mesh().ico(5).unwrap();
let uv = Sphere::default().mesh().uv(32, 18);
```
We could add methods on `Sphere` directly to skip calling `.mesh()`.
I also added a `SphereKind` enum that can be used with the `kind`
method:
```rust
let ico = Sphere::default()
.mesh()
.kind(SphereKind::Ico { subdivisions: 8 })
.build();
```
The default mesh for a `Sphere` is an icosphere with 5 subdivisions
(like the default `Icosphere`).
---
## Changelog
- Implement `Meshable` and `Default` for `Cuboid`, `Sphere`, `Cylinder`,
`Capsule`, `Torus`, and `Plane3d`
- Use primitives in `3d_shapes` example
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Currently the `missing_docs` lint is allowed-by-default and enabled at
crate level when a crate's documentation is complete (see #3492).
This PR proposes to invert this logic by making `missing_docs`
warn-by-default and marking crates with incomplete docs as allowed.
## Solution
Makes `missing_docs` warn at the workspace level and allows it at crate
level when the docs are incomplete.
The PR is in a reviewable state now in the sense that the basic
implementations are there. There are still some ToDos that I'm aware of:
- [x] docs for all the new structs and traits
- [x] implement `Default` and derive other useful traits for the new
structs
- [x] Take a look at the notes again (Do this after a first round of
reviews)
- [x] Take care of the repetition in the circle drawing functions
---
# Objective
- TLDR: This PR enables us to quickly draw all the newly added
primitives from `bevy_math` in immediate mode with gizmos
- Addresses #10571
## Solution
- This implements the first design idea I had that covered everything
that was mentioned in the Issue
https://github.com/bevyengine/bevy/issues/10571#issuecomment-1863646197
---
## Caveats
- I added the `Primitive(2/3)d` impls for `Direction(2/3)d` to make them
work with the current solution. We could impose less strict requirements
for the gizmoable objects and remove the impls afterwards if the
community doesn't like the current approach.
---
## Changelog
- implement the capability to draw ellipses with gizmos in general
(this was required to have code that can draw the ellipse primitive)
- refactored circle drawing code to use the more general ellipse drawing
code to keep code duplication low
- implement `Primitive2d` for `Direction2d` and impl `Primitive3d` for
`Direction3d`
- implement trait to draw primitives with specialized details with
gizmos
- `GizmoPrimitive2d` for all the 2D primitives
- `GizmoPrimitive3d` for all the 3D primitives
- (question while writing this: Does it actually matter if we split this
in 2D and 3D? I guess it could be useful in the future if we do
something based on the main rendering mode even though atm it's kinda
useless)
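A sketch of what the new traits enable (signature abbreviated from my
reading of the API):
```rust
use bevy::math::primitives::Circle;
use bevy::prelude::*;

fn draw(mut gizmos: Gizmos) {
    // One call draws any supported primitive at a position and rotation.
    gizmos.primitive_2d(Circle { radius: 1.0 }, Vec2::ZERO, 0.0, Color::WHITE);
}
```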
---
---------
Co-authored-by: nothendev <borodinov.ilya@gmail.com>
# Objective
Drawing a `Gizmos::circle` whose normal is derived from a Transform's
local axes now requires converting a Vec3 to a Direction3d and
unwrapping the result, and I think we should move the conversion into
Bevy.
## Solution
We can make
`Transform::{left,right,up,down,forward,back,local_x,local_y,local_z}`
return a Direction3d, because they know that their results will be of
finite non-zero length (roughly 1.0).
---
## Changelog
`Transform::up()` and similar functions now return `Direction3d` instead
of `Vec3`.
## Migration Guide
Callers of `Transform::up()` and similar functions may have to
dereference the returned `Direction3d` to get to the inner `Vec3`.
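A sketch of the change in practice:
```rust
use bevy::prelude::*;

fn offset_above(transform: &Transform) -> Vec3 {
    // `up()` now returns a `Direction3d`, guaranteed to be unit length.
    let up = transform.up();
    // Dereference where a plain `Vec3` is needed:
    transform.translation + *up * 2.0
}
```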
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Right now, all assets in the main world get extracted and prepared in
the render world (if the asset's using the RenderAssetPlugin). This is
unfortunate for two cases:
1. **TextureAtlas** / **FontAtlas**: This one's huge. The individual
`Image` assets that make up the atlas are cloned and prepared
individually when there's no reason for them to be. The atlas textures
are built on the CPU in the main world. *There can be hundreds of images
that get prepared for rendering only not to be used.*
2. If one loads an Image and needs to transform it in a system before
rendering it, kind of like the [decompression
example](https://github.com/bevyengine/bevy/blob/main/examples/asset/asset_decompression.rs#L120),
there's a price paid for extracting & preparing the asset that's not
intended to be rendered yet.
------
* References #10520
* References #1782
## Solution
This changes the `RenderAssetPersistencePolicy` enum to bitflags. I felt
that the objective with the parameter is so similar in nature to wgpu's
[`TextureUsages`](https://docs.rs/wgpu/latest/wgpu/struct.TextureUsages.html)
and
[`BufferUsages`](https://docs.rs/wgpu/latest/wgpu/struct.BufferUsages.html),
that it may as well be just like that.
```rust
// This asset only needs to be in the main world. Don't extract and prepare it.
RenderAssetUsages::MAIN_WORLD
// Keep this asset in both the main world and the render world.
RenderAssetUsages::MAIN_WORLD | RenderAssetUsages::RENDER_WORLD
// This asset is only needed in the render world. Remove it from the asset server once extracted.
RenderAssetUsages::RENDER_WORLD
```
### Alternate Solution
I considered introducing a third variant to the
`RenderAssetPersistencePolicy` enum:
```rust
enum RenderAssetPersistencePolicy {
/// Keep the asset in the main world after extracting to the render world.
Keep,
/// Remove the asset from the main world after extracting to the render world.
Unload,
/// This doesn't need to be in the render world at all.
NoExtract, // <-----
}
```
Functional, but this seemed like shoehorning. Another option is renaming
the enum to something like:
```rust
enum RenderAssetExtractionPolicy {
/// Extract the asset and keep it in the main world.
Extract,
/// Remove the asset from the main world after extracting to the render world.
ExtractAndUnload,
/// This doesn't need to be in the render world at all.
NoExtract,
}
```
I think this last one could be a good option if the bitflags are too
clunky.
## Migration Guide
* `RenderAssetPersistencePolicy::Keep` → `RenderAssetUsages::MAIN_WORLD |
RenderAssetUsages::RENDER_WORLD` (or `RenderAssetUsages::default()`)
* `RenderAssetPersistencePolicy::Unload` →
`RenderAssetUsages::RENDER_WORLD`
* For types implementing the `RenderAsset` trait, change `fn
persistence_policy(&self) -> RenderAssetPersistencePolicy` to `fn
asset_usage(&self) -> RenderAssetUsages`.
* Change any references to `cpu_persistent_access`
(`RenderAssetPersistencePolicy`) to `asset_usage` (`RenderAssetUsages`).
This applies to `Image`, `Mesh`, and a few other types.
# Objective
- example `animated_material` is more complex than needed to show how to
animate materials
- it makes CI crash because it uses too much memory
## Solution
- Simplify the example
# Objective
- Implement an arc3d API for gizmos
- Solves #11536
## Solution
### `arc_3d`
- The current `arc_3d` method on gizmos only takes an angle
- It draws a "standard arc" by default; this is an arc starting at
`Vec3::X`, in the XZ plane, drawn in a counter-clockwise direction with
a normal that is facing up
- The "standard arc" can be customized with the usual gizmo builder
pattern. This way you'll be able to draw arbitrary arcs
### `short/long_arc_3d_between`
- Given `center`, `from`, `to` draws an arc between `from` and `to`
---
## Changelog
- Added: `Gizmos::arc_3d(&mut self, angle)` method
- Added: `Gizmos::long_arc_3d_between(&mut self, center, from, to)`
method
- Added: `Gizmos::short_arc_3d_between(&mut self, center, from, to)`
method
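A sketch of the resulting API (parameter lists reflect my reading of the
final methods and may be abbreviated):
```rust
use bevy::prelude::*;
use std::f32::consts::FRAC_PI_2;

fn draw(mut gizmos: Gizmos) {
    // A quarter-turn "standard arc", placed and oriented explicitly:
    gizmos.arc_3d(FRAC_PI_2, 1.0, Vec3::ZERO, Quat::IDENTITY, Color::WHITE);
    // The shorter arc around `center` from `from` to `to`:
    gizmos.short_arc_3d_between(Vec3::ZERO, Vec3::X, Vec3::Z, Color::WHITE);
}
```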
---
This PR factors out an orthogonal part of another PR as mentioned in
[this
comment](https://github.com/bevyengine/bevy/pull/11072#issuecomment-1883859573)
# Objective
Alternative to #11515.
Fixes change detection being triggered on the "image viewer image
material" every frame.
## Solution
- Split the megasystem into two separate systems: one to handle drop
events, and one to handle asset change events.
- Move the event reader iteration "outside", so that we're only doing
stuff when there are events.
- Flatten some of the more extreme nesting
- Other bits of cleanup, removing an unnecessary clone and unused
variable.
I think these systems can even run in parallel now, not that it
particularly matters.
# Objective
- Fixes #11516
## Solution
- Add Animated Material example (colors are hue-cycling smoothly
per-mesh)
![image](https://github.com/bevyengine/bevy/assets/11307157/c75b9e66-0019-41b8-85ec-647559c6ba01)
Note: this example reproduces the perf issue found in #10610 pretty
consistently, with and without the changes from that PR included. Frame
time is sometimes around 4.3ms, other times around 12-14ms. Its pretty
random per run. I think this clears #10610 for merge.
# Objective
- Prep for https://github.com/bevyengine/bevy/pull/10164
- Make deferred_lighting_pass_id a ColorAttachment
- Correctly extract shadow view frusta so that the view uniforms get
populated
- Make some needed things public
- Misc formatting
# Objective
Fix weird visuals when drawing a gizmo with a non-normed normal.
Fixes #11401
## Solution
Just normalize right before we draw. Could do it when constructing the
builder but that seems less consistent.
## Changelog
- gizmos.circle normal is now a Direction3d instead of a Vec3.
## Migration Guide
- Pass a Direction3d for gizmos.circle normal, e.g.
`Direction3d::new(vec).unwrap_or(default)` or potentially
`Direction3d::new_unchecked(vec)` if you know your vec is definitely
normalized.
# Objective
Currently, the `primitives` module is inside of the prelude for
`bevy_math`, but the actual primitives are not. This requires either
importing the shapes everywhere that uses them, or adding the
`primitives::` prefix:
```rust
let rectangle = meshes.add(primitives::Rectangle::new(5.0, 2.5));
```
(Note: meshing isn't actually implemented yet, but it's in #11431)
The primitives are meant to be used for a variety of tasks across
several crates, like for meshing, bounding volumes, gizmos, colliders,
and so on, so I think having them in the prelude is justified. It would
make several common tasks a lot more ergonomic.
```rust
let rectangle = meshes.add(Rectangle::new(5.0, 2.5));
```
## Solution
Add `primitives::*` to `bevy_math::prelude`.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This pull request re-submits #10057, which was backed out for breaking
macOS, iOS, and Android. I've tested this version on macOS and Android
and on the iOS simulator.
# Objective
This pull request implements *reflection probes*, which generalize
environment maps to allow for multiple environment maps in the same
scene, each of which has an axis-aligned bounding box. This is a
standard feature of physically-based renderers and was inspired by [the
corresponding feature in Blender's Eevee renderer].
## Solution
This is a minimal implementation of reflection probes that allows
artists to define cuboid bounding regions associated with environment
maps. For every view, on every frame, a system builds up a list of the
nearest 4 reflection probes that are within the view's frustum and
supplies that list to the shader. The PBR fragment shader searches
through the list, finds the first containing reflection probe, and uses
it for indirect lighting, falling back to the view's environment map if
none is found. Both forward and deferred renderers are fully supported.
A reflection probe is an entity with a pair of components, *LightProbe*
and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to
position it in the world). The *LightProbe* component (along with the
*Transform*) defines the bounding region, while the
*EnvironmentMapLight* component specifies the associated diffuse and
specular cubemaps.
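Concretely, spawning one looks something like this (a sketch; the asset
paths are illustrative):
```rust
use bevy::pbr::{EnvironmentMapLight, LightProbe};
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpatialBundle {
            // The bounding region: a 1×1×1 cube scaled to the desired extents.
            transform: Transform::from_scale(Vec3::splat(10.0)),
            ..default()
        },
        LightProbe,
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
            intensity: 150.0,
        },
    ));
}
```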
A frequent question is "why two components instead of just one?" The
advantages of this setup are:
1. It's readily extensible to other types of light probes, in particular
*irradiance volumes* (also known as ambient cubes or voxel global
illumination), which use the same approach of bounding cuboids. With a
single component that applies to both reflection probes and irradiance
volumes, we can share the logic that implements falloff and blending
between multiple light probes between both of those features.
2. It reduces duplication between the existing *EnvironmentMapLight* and
these new reflection probes. Systems can treat environment maps attached
to cameras the same way they treat environment maps applied to
reflection probes if they wish.
Internally, we gather up all environment maps in the scene and place
them in a cubemap array. At present, this means that all environment
maps must have the same size, mipmap count, and texture format. A
warning is emitted if this restriction is violated. We could potentially
relax this in the future as part of the automatic mipmap generation
work, which could easily do texture format conversion as part of its
preprocessing.
An easy way to generate reflection probe cubemaps is to bake them in
Blender and use the `export-blender-gi` tool that's part of the
[`bevy-baked-gi`] project. This tool takes a `.blend` file containing
baked cubemaps as input and exports cubemap images, pre-filtered with an
embedded fork of the [glTF IBL Sampler], alongside a corresponding
`.scn.ron` file that the scene spawner can use to recreate the
reflection probes.
Note that this is intentionally a minimal implementation, to aid
reviewability. Known issues are:
* Reflection probes are basically unsupported on WebGL 2, because WebGL
2 has no cubemap arrays. (Strictly speaking, you can have precisely one
reflection probe in the scene if you have no other cubemaps anywhere,
but this isn't very useful.)
* Reflection probes have no falloff, so reflections will abruptly change
when objects move from one bounding region to another.
* As mentioned before, all cubemaps in the world of a given type
(diffuse or specular) must have the same size, format, and mipmap count.
Future work includes:
* Blending between multiple reflection probes.
* A falloff/fade-out region so that reflected objects disappear
gradually instead of vanishing all at once.
* Irradiance volumes for voxel-based global illumination. This should
reuse much of the reflection probe logic, as they're both GI techniques
based on cuboid bounding regions.
* Support for WebGL 2, by breaking batches when reflection probes are
used.
These issues notwithstanding, I think it's best to land this with
roughly the current set of functionality, because this patch is useful
as is and adding everything above would make the pull request
significantly larger and harder to review.
---
## Changelog
### Added
* A new *LightProbe* component is available that specifies a bounding
region that an *EnvironmentMapLight* applies to. The combination of a
*LightProbe* and an *EnvironmentMapLight* offers *reflection probe*
functionality similar to that available in other engines.
[the corresponding feature in Blender's Eevee renderer]:
https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler
# Objective
- Example `deterministic` crashes on CI on Windows because it uses too
much memory
## Solution
- Reduce the number of planes displayed while still having the issue
- While there, add a small margin to the text so that it's prettier
# Objective
This PR aims to implement multiple configs for gizmos as discussed in
#9187.
## Solution
Configs for the new `GizmoConfigGroup`s are stored in a
`GizmoConfigStore` resource and can be accessed using a type-based key
or iterated over. This type-based key doubles as a standardized location
where plugin authors can put their own configuration not covered by the
standard `GizmoConfig` struct. For example the `AabbGizmoGroup` has a
default color and toggle to show all AABBs. New configs can be
registered using `app.init_gizmo_group::<T>()` during startup.
When requesting the `Gizmos<T>` system parameter the generic type
determines which config is used. The config structs are available
through the `Gizmos` system parameter allowing for easy access while
drawing your gizmos.
Internally, resources and systems used for rendering (up to and including
the extract system) are generic over the type based key and inserted on
registering a new config.
## Alternatives
The configs could be stored as components on entities with markers which
would make better use of the ECS. I also implemented this approach
([here](https://github.com/jeliag/bevy/tree/gizmo-multiconf-comp)) and
believe that the ergonomic benefits of a central config store outweigh
the decreased use of the ECS.
## Unsafe Code
Implementing system parameter by hand is unsafe but seems to be required
to access the config store once and not on every gizmo draw function
call. This is critical for performance. ~~Is there a better way to do
this?~~
## Future Work
New gizmos (such as #10038, and ideas from #9400) will require custom
configuration structs. Should there be a new custom config for every
gizmo type, or should we group them together in a common configuration?
(for example `EditorGizmoConfig`, or something more fine-grained)
## Changelog
- Added `GizmoConfigStore` resource and `GizmoConfigGroup` trait
- Added `init_gizmo_group` to `App`
- Added early returns to gizmo drawing increasing performance when
gizmos are disabled
- Changed `GizmoConfig` and aabb gizmos to use new `GizmoConfigStore`
- Changed `Gizmos` system parameter to use type based key to retrieve
config
- Changed resources and systems used for gizmo rendering to be generic
over type based key
- Changed examples (3d_gizmos, 2d_gizmos) to showcase new API
## Migration Guide
- `GizmoConfig` is no longer a resource and has to be accessed through
the `GizmoConfigStore` resource. The default config group is
`DefaultGizmoConfigGroup`, but consider using your own custom config
group if applicable.
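A sketch of the new access pattern (type names per this changelog; the
default group name reflects my reading of the final API):
```rust
use bevy::prelude::*;

fn toggle_gizmos(mut config_store: ResMut<GizmoConfigStore>) {
    // Fetch the shared config (and the group's own extension) by type key.
    let (config, _group) = config_store.config_mut::<DefaultGizmoConfigGroup>();
    config.enabled = !config.enabled;
}
```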
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Rebased and finished version of
https://github.com/bevyengine/bevy/pull/8407. Huge thanks to @GitGhillie
for adjusting all the examples, and the many other people who helped
write this PR (@superdump, @coreh, among others) :)
Fixes https://github.com/bevyengine/bevy/issues/8369
---
## Changelog
- Added a `brightness` control to `Skybox`.
- Added an `intensity` control to `EnvironmentMapLight`.
- Added `ExposureSettings` and `PhysicalCameraParameters` for
controlling exposure of 3D cameras.
- Removed the baked-in `DirectionalLight` exposure Bevy previously
hardcoded internally.
## Migration Guide
- If using a `Skybox` or `EnvironmentMapLight`, use the new `brightness`
and `intensity` controls to adjust their strength.
- All 3D scenes will now have different apparent brightnesses due to Bevy
implementing proper exposure controls. You will have to adjust the
intensity of your lights and/or your camera exposure via the new
`ExposureSettings` component to compensate.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: GitGhillie <jillisnoordhoek@gmail.com>
Co-authored-by: Marco Buono <thecoreh@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
# Objective
Add support for presenting each UI tree on a specific window and
viewport, while making as few breaking changes as possible.
This PR is meant to resolve the following issues at once, since they're
all related.
- Fixes #5622
- Fixes #5570
- Fixes #5621
Adopted #5892, but started over since the current codebase diverged
significantly from the original PR branch. Also, I made a decision to
propagate the component to children instead of recursively iterating
over nodes in search of the root.
## Solution
Add a new optional component that can be inserted on UI root nodes and
propagated to children to specify which camera they should render onto.
This is then used to get the render target and the viewport for that UI
tree. Since this component is optional, the default behavior should be
to render onto the single camera (if only one exist) and warn of
ambiguity if multiple cameras exist. This reduces the complexity for
users with just one camera, while giving control in contexts where it
matters.
## Changelog
- Adds a `TargetCamera(Entity)` component to specify which camera a
node tree should be rendered into (see the sketch after this list). If
only one camera exists, this component is optional.
- Adds an example of rendering UI to a texture and using it as a
material in a 3D world.
- Fixes recalculation of physical viewport size when target scale factor
changes. This can happen when the window is moved between displays with
different DPI.
- Changes examples to demonstrate assigning UI to different viewports
and windows and make interactions in an offset viewport testable.
- Removes `UiCameraConfig`. UI visibility now can be controlled via
combination of explicit `TargetCamera` and `Visibility` on the root
nodes.
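A sketch of explicit assignment (component name per the changelog above):
```rust
use bevy::prelude::*;
use bevy::ui::TargetCamera;

fn setup(mut commands: Commands) {
    let camera = commands.spawn(Camera2dBundle::default()).id();
    // Root nodes without a TargetCamera fall back to the single camera, if any.
    commands.spawn((NodeBundle::default(), TargetCamera(camera)));
}
```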
---------
Co-authored-by: davier <bricedavier@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
# Objective
Unify flycam-style camera controller from the examples.
`parallax_mapping` controller was kept as is.
## Solution
Fixed some mouse movement & cursor grabbing related issues too.
# Objective
Issue #10243: rendering multiple triangles in the same place results in
flickering.
## Solution
Considered these alternatives:
- `depth_bias` may not work: because of the high number of entities,
creating a material per entity is practically not possible
- rendering at slightly different positions does not work, because when
the camera is far, float rounding causes the same issues (edit: assuming
we have to use the same `depth_bias`)
- considered implementing deterministic operation like
`query.par_iter().flat_map(...).collect()` to be used in
`check_visibility` system (which would solve the issue since query is
deterministic), and could not figure out how to make it as cheap as
current approach with thread-local collectors (#11249)
So this PR adds an option to sort entities after the `check_visibility`
system runs.
Should not be too bad, because after the visibility check, only a
handful of entities remain.
This is probably not the only source of non-determinism in Bevy, but
it's the one I could find so far. At least it fixes the repro example.
## Changelog
- `DeterministicRenderingConfig` option to enable deterministic
rendering
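A sketch of opting in (the resource name is from the changelog; the
module path and field name are assumptions on my part):
```rust
use bevy::prelude::*;
use bevy::render::deterministic::DeterministicRenderingConfig;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Sort entities after visibility checks to stabilize draw order.
        .insert_resource(DeterministicRenderingConfig {
            stable_sort_z_fighting: true,
        })
        .run();
}
```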
## Test
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/28969/c735bce1-3a71-44cd-8677-c19f6c0ee6bd">
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Update to `glam` 0.25, `encase` 0.7 and `hexasphere` to 10.0
## Changelog
Added the `FloatExt` trait to the `bevy_math` prelude which adds `lerp`,
`inverse_lerp` and `remap` methods to the `f32` and `f64` types.
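A quick sketch of the new methods:
```rust
use bevy::math::FloatExt;

fn example() {
    let mid = 2.0_f32.lerp(10.0, 0.5);                 // 6.0
    let t = 6.0_f32.inverse_lerp(2.0, 10.0);           // 0.5
    let remapped = 6.0_f32.remap(2.0, 10.0, 0.0, 1.0); // 0.5
}
```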
# Motivation
When spawning entities into a scene, it is very common to create assets
like meshes and materials and to add them via asset handles. A common
setup might look like this:
```rust
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
) {
commands.spawn(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material: materials.add(StandardMaterial::from(Color::RED)),
..default()
});
}
```
Let's take a closer look at the part that adds the assets using `add`.
```rust
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material: materials.add(StandardMaterial::from(Color::RED)),
```
Here, "mesh" and "material" are both repeated three times. It's very
explicit, but I find it to be a bit verbose. In addition to being more
code to read and write, the extra characters can sometimes also lead to
the code being formatted to span multiple lines even though the core
task, adding e.g. a primitive mesh, is extremely simple.
A way to address this is by using `.into()`:
```rust
mesh: meshes.add(shape::Cube { size: 1.0 }.into()),
material: materials.add(Color::RED.into()),
```
This is fine, but from the names and the type of `meshes`, we already
know what the type should be. It's very clear that `Cube` should be
turned into a `Mesh` because of the context it's used in. `.into()` is
just seven characters, but it's so common that it quickly adds up and
gets annoying.
It would be nice if you could skip all of the conversion and let Bevy
handle it for you:
```rust
mesh: meshes.add(shape::Cube { size: 1.0 }),
material: materials.add(Color::RED),
```
# Objective
Make adding assets more ergonomic by making `Assets::add` take an `impl
Into<A>` instead of `A`.
## Solution
`Assets::add` now takes an `impl Into<A>` instead of `A`, so e.g. this
works:
```rust
commands.spawn(PbrBundle {
mesh: meshes.add(shape::Cube { size: 1.0 }),
material: materials.add(Color::RED),
..default()
});
```
I also changed all examples to use this API, which increases consistency
as well because `Mesh::from` and `into` were being used arbitrarily even
in the same file. This also gets rid of some lines of code because
formatting is nicer.
---
## Changelog
- `Assets::add` now takes an `impl Into<A>` instead of `A`
- Examples don't use `T::from(K)` or `K.into()` when adding assets
## Migration Guide
Some `into` calls that worked previously might now be broken because of
the new trait bounds. You need to either remove `into` or perform the
conversion explicitly with `from`:
```rust
// Doesn't compile
let mesh_handle = meshes.add(shape::Cube { size: 1.0 }.into());

// These compile
let mesh_handle = meshes.add(shape::Cube { size: 1.0 });
let mesh_handle = meshes.add(Mesh::from(shape::Cube { size: 1.0 }));
```
## Concerns
I believe the primary concerns might be:
1. Is this too implicit?
2. Does this increase codegen bloat?
Previously, the two APIs were using `into` or `from`, and now it's
"nothing" or `from`. You could argue that `into` is slightly more
explicit than "nothing" in cases like the earlier examples where a
`Color` gets converted to e.g. a `StandardMaterial`, but I personally
don't think `into` adds much value even in this case, and you could
still see the actual type from the asset type.
As for codegen bloat, I doubt it adds that much, but I'm not very
familiar with the details of codegen. I personally value the user-facing
code reduction and ergonomics improvements that these changes would
provide, but it might be worth checking the other effects in more
detail.
Another slight concern is migration pain; apps might have a ton of
`into` calls that would need to be removed, and it did take me a while
to do so for Bevy itself (maybe around 20-40 minutes). However, I think
the fact that there *are* so many `into` calls just highlights that the
API could be made nicer, and I'd gladly migrate my own projects for it.
# Objective
This pull request implements *reflection probes*, which generalize
environment maps to allow for multiple environment maps in the same
scene, each of which has an axis-aligned bounding box. This is a
standard feature of physically-based renderers and was inspired by [the
corresponding feature in Blender's Eevee renderer].
## Solution
This is a minimal implementation of reflection probes that allows
artists to define cuboid bounding regions associated with environment
maps. For every view, on every frame, a system builds up a list of the
nearest 4 reflection probes that are within the view's frustum and
supplies that list to the shader. The PBR fragment shader searches
through the list, finds the first containing reflection probe, and uses
it for indirect lighting, falling back to the view's environment map if
none is found. Both forward and deferred renderers are fully supported.
A reflection probe is an entity with a pair of components, *LightProbe*
and *EnvironmentMapLight* (as well as the standard *SpatialBundle*, to
position it in the world). The *LightProbe* component (along with the
*Transform*) defines the bounding region, while the
*EnvironmentMapLight* component specifies the associated diffuse and
specular cubemaps.
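A hedged sketch of spawning one, based on my reading of the description
above (the component fields and cubemap paths are placeholders, not code
from this PR):
```rust
use bevy::pbr::{EnvironmentMapLight, LightProbe};
use bevy::prelude::*;

fn spawn_probe(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        SpatialBundle {
            // The transform defines the probe's bounding region.
            transform: Transform::from_scale(Vec3::splat(10.0)),
            ..default()
        },
        LightProbe,
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/diffuse.ktx2"),
            specular_map: asset_server.load("environment_maps/specular.ktx2"),
        },
    ));
}
```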
A frequent question is "why two components instead of just one?" The
advantages of this setup are:
1. It's readily extensible to other types of light probes, in particular
*irradiance volumes* (also known as ambient cubes or voxel global
illumination), which use the same approach of bounding cuboids. With a
single component that applies to both reflection probes and irradiance
volumes, we can share the logic that implements falloff and blending
between multiple light probes between both of those features.
2. It reduces duplication between the existing *EnvironmentMapLight* and
these new reflection probes. Systems can treat environment maps attached
to cameras the same way they treat environment maps applied to
reflection probes if they wish.
Internally, we gather up all environment maps in the scene and place
them in a cubemap array. At present, this means that all environment
maps must have the same size, mipmap count, and texture format. A
warning is emitted if this restriction is violated. We could potentially
relax this in the future as part of the automatic mipmap generation
work, which could easily do texture format conversion as part of its
preprocessing.
An easy way to generate reflection probe cubemaps is to bake them in
Blender and use the `export-blender-gi` tool that's part of the
[`bevy-baked-gi`] project. This tool takes a `.blend` file containing
baked cubemaps as input and exports cubemap images, pre-filtered with an
embedded fork of the [glTF IBL Sampler], alongside a corresponding
`.scn.ron` file that the scene spawner can use to recreate the
reflection probes.
Note that this is intentionally a minimal implementation, to aid
reviewability. Known issues are:
* Reflection probes are basically unsupported on WebGL 2, because WebGL
2 has no cubemap arrays. (Strictly speaking, you can have precisely one
reflection probe in the scene if you have no other cubemaps anywhere,
but this isn't very useful.)
* Reflection probes have no falloff, so reflections will abruptly change
when objects move from one bounding region to another.
* As mentioned before, all cubemaps in the world of a given type
(diffuse or specular) must have the same size, format, and mipmap count.
Future work includes:
* Blending between multiple reflection probes.
* A falloff/fade-out region so that reflected objects disappear
gradually instead of vanishing all at once.
* Irradiance volumes for voxel-based global illumination. This should
reuse much of the reflection probe logic, as they're both GI techniques
based on cuboid bounding regions.
* Support for WebGL 2, by breaking batches when reflection probes are
used.
These issues notwithstanding, I think it's best to land this with
roughly the current set of functionality, because this patch is useful
as is and adding everything above would make the pull request
significantly larger and harder to review.
---
## Changelog
### Added
* A new *LightProbe* component is available that specifies a bounding
region that an *EnvironmentMapLight* applies to. The combination of a
*LightProbe* and an *EnvironmentMapLight* offers *reflection probe*
functionality similar to that available in other engines.
[the corresponding feature in Blender's Eevee renderer]:
https://docs.blender.org/manual/en/latest/render/eevee/light_probes/reflection_cubemaps.html
[`bevy-baked-gi`]: https://github.com/pcwalton/bevy-baked-gi
[glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler
# Objective
In my code I use a lot of images as render targets.
I'd like some convenience methods for working with this type.
## Solution
- Allow `.into()` to construct a `RenderTarget`
- Add `.as_image()`
---
## Changelog
### Added
- `RenderTarget` can be constructed via `.into()` on a `Handle<Image>`
- `RenderTarget` new method: `as_image`
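A small sketch of both additions; the exact return type of `as_image` is
an assumption here:
```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;

fn to_target(image_handle: Handle<Image>) -> RenderTarget {
    // `.into()` instead of spelling out the enum variant.
    let target: RenderTarget = image_handle.into();
    // And back again; assumed to return Option<&Handle<Image>>.
    let _image = target.as_image();
    target
}
```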
---------
Signed-off-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
Co-authored-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
# Objective
- There is no point in keeping meshes/images in RAM once they have been
sent to the GPU and are kept in VRAM. This saves a _significant_ amount
of memory (several GBs) on scenes like Bistro.
- References
- https://github.com/bevyengine/bevy/pull/1782
- https://github.com/bevyengine/bevy/pull/8624
## Solution
- Augment `RenderAsset` with the capability to unload the underlying
asset after extracting to the render world.
- `Mesh`/`Image` now have a `cpu_persistent_access` field. If this field
is `RenderAssetPersistencePolicy::Unload`, the asset will be unloaded
from `Assets<T>`.
- A new `AssetEvent` is sent upon dropping the last strong handle for
the asset, which signals to the `RenderAsset` to remove the GPU version
of the asset.
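A hedged sketch of keeping CPU access for one mesh, assuming
`cpu_persistent_access` is a public field and the enum lives in
`bevy::render::render_asset` (both assumptions):
```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetPersistencePolicy; // path assumed

fn setup(mut meshes: ResMut<Assets<Mesh>>) {
    let mut mesh = Mesh::from(shape::Cube { size: 1.0 });
    // Keep the CPU copy even after the GPU version exists.
    mesh.cpu_persistent_access = RenderAssetPersistencePolicy::Keep;
    let _handle = meshes.add(mesh);
}
```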
---
## Changelog
- Added `AssetEvent::NoLongerUsed` and
`AssetEvent::is_no_longer_used()`. This event is sent when the last
strong handle of an asset is dropped.
- Rewrote the API for `RenderAsset` to allow for unloading the asset
data from the CPU.
- Added `RenderAssetPersistencePolicy`.
- Added `Mesh::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `Image::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `ImageLoaderSettings::cpu_persistent_access`.
- Added `ExrTextureLoaderSettings`.
- Added `HdrTextureLoaderSettings`.
## Migration Guide
- Asset loaders (GLTF, etc) now load meshes and textures without
`cpu_persistent_access`. These assets will be removed from
`Assets<Mesh>` and `Assets<Image>` once `RenderAssets<Mesh>` and
`RenderAssets<Image>` contain the GPU versions of these assets, in order
to reduce memory usage. If you require access to the asset data from the
CPU in future frames after the GLTF asset has been loaded, modify all
dependent `Mesh` and `Image` assets and set `cpu_persistent_access` to
`RenderAssetPersistencePolicy::Keep`.
- `Mesh` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `Image` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `MorphTargetImage::new()` now requires a new `cpu_persistent_access`
parameter. Set it to `RenderAssetPersistencePolicy::Keep` to mimic the
previous behavior.
- `DynamicTextureAtlasBuilder::add_texture()` now requires that the
`TextureAtlas` you pass has an `Image` with `cpu_persistent_access:
RenderAssetPersistencePolicy::Keep`. Ensure you construct the image
properly for the texture atlas.
- The `RenderAsset` trait has significantly changed, and requires
adapting your existing implementations.
- The trait now requires `Clone`.
- The `ExtractedAsset` associated type has been removed (the type itself
is now extracted).
- The signature of `prepare_asset()` is slightly different.
- A new `persistence_policy()` method is now required (return
`RenderAssetPersistencePolicy::Unload` to match the previous behavior).
- Match on the new `NoLongerUsed` variant for exhaustive matches of
`AssetEvent`.
![Screenshot](https://i.imgur.com/A4KzWFq.png)
# Objective
Lightmaps, textures that store baked global illumination, have been a
mainstay of real-time graphics for decades. Bevy currently has no
support for them, so this pull request implements them.
## Solution
The new `Lightmap` component can be attached to any entity that contains
a `Handle<Mesh>` and a `StandardMaterial`. When present, it will be
applied in the PBR shader. Because multiple lightmaps are frequently
packed into atlases, each lightmap may have its own UV boundaries within
its texture. An `exposure` field is also provided, to control the
brightness of the lightmap.
Note that this PR doesn't provide any way to bake the lightmaps. That
can be done with [The Lightmapper] or another solution, such as Unity's
Bakery.
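A hedged sketch of attaching one, with field names inferred from the
description above (`..default()` assumes the component implements
`Default`; the asset path is a placeholder):
```rust
use bevy::pbr::Lightmap;
use bevy::prelude::*;

fn add_lightmap(commands: &mut Commands, asset_server: &AssetServer, entity: Entity) {
    commands.entity(entity).insert(Lightmap {
        image: asset_server.load("lightmaps/scene.ktx2"),
        // This lightmap's UV region within its (possibly atlased) texture.
        uv_rect: Rect::new(0.0, 0.0, 1.0, 1.0),
        // `exposure` and any other fields left at their defaults.
        ..default()
    });
}
```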
---
## Changelog
### Added
* A new component, `Lightmap`, is available, for baked global
illumination. If your mesh has a second UV channel (UV1), and you attach
this component to the entity with that mesh, Bevy will apply the texture
referenced in the lightmap.
[The Lightmapper]: https://github.com/Naxela/The_Lightmapper
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Custom render passes, and future engine passes (such as
https://github.com/bevyengine/bevy/pull/10164), need a better way to
know, and to indicate to the core passes, whether the view
color/depth/prepass attachments have been cleared yet this frame, so
they can decide whether to clear or load them.
## Solution
- For all render targets (depth textures, shadow textures, prepass
textures, main textures) use an atomic bool to track whether or not each
texture has been cleared this frame. Abstracted away in the new
ColorAttachment and DepthAttachment wrappers.
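An illustrative, std-only sketch of the tracking idea (the real wrappers
also carry the texture view, resolve target, clear color, and so on):
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

struct ColorAttachment {
    is_first_use: Arc<AtomicBool>,
}

impl ColorAttachment {
    /// Returns true exactly once per frame: the first pass to bind the
    /// texture clears it; every later pass loads it instead.
    fn should_clear(&self) -> bool {
        self.is_first_use.swap(false, Ordering::SeqCst)
    }

    /// Reset at the start of each frame.
    fn reset(&self) {
        self.is_first_use.store(true, Ordering::SeqCst);
    }
}
```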
---
## Changelog
- Changed `ViewTarget::get_color_attachment()`, removed arguments.
- Changed `ViewTarget::get_unsampled_color_attachment()`, removed
arguments.
- Removed `Camera3d::clear_color`.
- Removed `Camera2d::clear_color`.
- Added `Camera::clear_color`.
- Added `ExtractedCamera::clear_color`.
- Added `ColorAttachment` and `DepthAttachment` wrappers.
- Moved `ClearColor` and `ClearColorConfig` from
`bevy::core_pipeline::clear_color` to `bevy::render::camera`.
- Core render passes now track when a texture is first bound as an
attachment in order to decide whether to clear or load it.
## Migration Guide
- Remove arguments to `ViewTarget::get_color_attachment()` and
`ViewTarget::get_unsampled_color_attachment()`.
- Configure clear color on `Camera` instead of on `Camera3d` and
`Camera2d`.
- Moved `ClearColor` and `ClearColorConfig` from
`bevy::core_pipeline::clear_color` to `bevy::render::camera`.
- `ViewDepthTexture` must now be created via the `new()` method
---------
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Fix CI hang so we can merge PRs again.
## Solution
- switch ppa action to use mesa stable versions
https://launchpad.net/~kisak/+archive/ubuntu/turtle
- use commit from #11123
---------
Co-authored-by: Stepan Koltsov <stepan.koltsov@gmail.com>
# Objective
- Update winit dependency to 0.29
## Changelog
### KeyCode changes
- Removed `ScanCode`, as it was [replaced by
KeyCode](https://github.com/rust-windowing/winit/blob/master/CHANGELOG.md#0292).
- `ReceivedCharacter.char` is now a `SmolStr`, [relevant
doc](https://docs.rs/winit/latest/winit/event/struct.KeyEvent.html#structfield.text).
- Changed most `KeyCode` values, and added more.
`KeyCode` has changed meaning. With this PR, it refers to the physical
position on the keyboard rather than the label printed on the key.
In practice this means:
- On QWERTY keyboard layouts, nothing changes
- On any other keyboard layout, `KeyCode` no longer reflects the label
on the key.
- This is "good". In Bevy 0.12, when you used WASD for movement, users
with non-QWERTY keyboards couldn't play your game! This was especially
bad for non-Latin keyboards. Now, WASD represents the physical keys: a
French player will press the ZQSD keys, which are near each other, and
Kyrgyz players will use "Цфыв".
- This is "bad" as well. You can't know in advance what the label of the
key for input is. Your UI says "press WASD to move", even if in reality
they should be pressing "ZQSD" or "Цфыв". You also can no longer use
`KeyCode` for text input. In any case, it was a pretty bad API for text
input; you should use `ReceivedCharacter` now instead.
### Other changes
- Use `web-time` rather than `instant` crate.
(https://github.com/rust-windowing/winit/pull/2836)
- winit split `run_return` into `run_ondemand` and `pump_events`; I made
the same change in bevy_winit and used `pump_events`.
- Removed `return_from_run` from `WinitSettings`, as `winit::run` now
returns on supported platforms.
- I left the example "return_after_run" as I think it's still useful.
- This winit change was made partly to allow creating a new window after
quitting all windows (https://github.com/emilk/egui/issues/1918); this
PR doesn't address that.
- Added `width` and `height` properties to the `canvas` in the Wasm
example
(https://github.com/bevyengine/bevy/pull/10702#discussion_r1420567168)
## Known regressions (important follow-ups?)
- Provide an API for reacting when a specific key from the current
layout was released.
- Possible solutions: use `winit::Key` from `winit::KeyEvent`; a mapping
between `KeyCode` and `Key`; or …
- We no longer receive characters through alt+numpad (e.g. alt + 151 =
"ù"); reproduced on the winit example "ime". Maybe related to
https://github.com/rust-windowing/winit/issues/2945
- (Windows) Window content doesn't refresh at all when resizing. From
reading https://github.com/rust-windowing/winit/issues/2900, I suspect
we should just fire a `window.request_redraw();` from `AboutToWait` and
handle the actual redrawing within `RedrawRequested`. I'm not sure how
to move all that code, so I'd appreciate it as a follow-up.
- (Windows) Unreleased winit fix for using `set_control_flow` in
`AboutToWait`: https://github.com/rust-windowing/winit/issues/3215 ; ⚠️
I'm not sure what the implications are, but that feels bad 🤔
## Follow up
I'd like to avoid bloating this PR; here are a few follow-up tasks
worthy of a separate PR or a new issue to track once this PR is closed,
as they would either complicate review or risk being controversial:
- remove CanvasParentResizePlugin
(https://github.com/bevyengine/bevy/pull/10702#discussion_r1417068856)
- Avoid explicitly mentioning winit in docs from bevy_window?
- NamedKey integration on bevy_input:
https://github.com/rust-windowing/winit/pull/3143 introduced a new
NamedKey variant. I implemented it only in the converters, but we'd
benefit from making the same changes to bevy_input.
- Add more info in KeyboardInput
https://github.com/bevyengine/bevy/pull/10702#pullrequestreview-1748336313
- https://github.com/bevyengine/bevy/pull/9905 added a workaround on a
bug allegedly fixed by winit 0.29. We should check if it's still
necessary.
- update to raw_window_handle 0.6
- blocked by wgpu
- Rename `KeyCode` to `PhysicalKeyCode`
https://github.com/bevyengine/bevy/pull/10702#discussion_r1404595015
- Remove the `instant` dependency ([replaced
by](https://github.com/rust-windowing/winit/pull/2836) `web-time`); we'd
need to update to:
- fastrand >= 2.0
- [`async-executor`](https://github.com/smol-rs/async-executor) >= 1.7
- [`futures-lite`](https://github.com/smol-rs/futures-lite) >= 2.0
- Verify license, see
[discussion](https://github.com/bevyengine/bevy/pull/8745#discussion_r1402439800)
- we might be missing a short notice or description of changes made
- Consider using https://github.com/rust-windowing/cursor-icon directly
rather than vendoring it in bevy.
- investigate [this
unwrap](https://github.com/bevyengine/bevy/pull/8745#discussion_r1387044986)
(`winit_window.canvas().unwrap();`)
- Use more good things about winit's update
- https://github.com/bevyengine/bevy/pull/10689#issuecomment-1823560428
## Migration Guide
This PR should have one.
# Objective
add `RenderLayers` awareness to lights. lights default to
`RenderLayers::layer(0)`, and must intersect the camera entity's
`RenderLayers` in order to affect the camera's output.
note that lights already use renderlayers to filter meshes for shadow
casting. this adds filtering lights per view based on intersection of
camera layers and light layers.
fixes#3462
## Solution
PointLights and SpotLights are assigned to individual views in
`assign_lights_to_clusters`, so we simply cull the lights which don't
match the view layers in that function.
DirectionalLights are global, so we
- add the light layers to the `DirectionalLight` struct
- add the view layers to the `ViewUniform` struct
- check for intersection before processing the light in
`apply_pbr_lighting`
Potential issue: when mesh/light layers are smaller than the view
layers, weird results can occur. E.g.:
camera = layers 1+2
light = layers 1
mesh = layers 2
The mesh does not cast shadows with respect to the light, as (1 & 2) == 0.
The light affects the view, as (1+2 & 1) != 0.
The view renders the mesh, as (1+2 & 2) != 0.
So the mesh is rendered and lit, but does not cast a shadow.
This could be fixed (so that the light would not affect the mesh in that
view) by adding the light layers to the point and spot light structs,
but I think the setup is pretty unusual, and space is at a premium in
those structs (adding 4 more bytes would reduce the WebGL point+spot
light max count to 240 from 256).
I think typical usage is for cameras to have a single layer, and for
meshes/lights to maybe have multiple layers so they render to e.g.
minimaps as well as primary views.
If there is a good use case for the above setup and we should support
it, please let me know.
---
## Migration Guide
Lights no longer affect all `RenderLayers` by default; like cameras and
meshes, they now default to `RenderLayers::layer(0)`. To recover the
previous behaviour and have all lights affect all views, add a
`RenderLayers::all()` component to the light entity.
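A minimal sketch of the suggested migration:
```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn spawn_light(mut commands: Commands) {
    // Restore the pre-PR behaviour: this light affects every view.
    commands.spawn((PointLightBundle::default(), RenderLayers::all()));
}
```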
# Objective
- Resolves#10853
## Solution
- ~~Changed the name of `Input` struct to `PressableInput`.~~
- Changed the name of `Input` struct to `ButtonInput`.
## Migration Guide
- Breaking Change: Users need to rename `Input` to `ButtonInput` in
their projects.
# Objective
A better alternative version of #10843.
Currently, Bevy has a single `Ray` struct for 3D. To allow better
interoperability with Bevy's primitive shapes (#10572) and some third
party crates (that handle e.g. spatial queries), it would be very useful
to have separate versions for 2D and 3D respectively.
## Solution
Separate `Ray` into `Ray2d` and `Ray3d`. These new structs also take
advantage of the new primitives by using `Direction2d`/`Direction3d` for
the direction:
```rust
pub struct Ray2d {
    pub origin: Vec2,
    pub direction: Direction2d,
}

pub struct Ray3d {
    pub origin: Vec3,
    pub direction: Direction3d,
}
```
and by using `Plane2d`/`Plane3d` in `intersect_plane`:
```rust
impl Ray2d {
    // ...
    pub fn intersect_plane(&self, plane_origin: Vec2, plane: Plane2d) -> Option<f32> {
        // ...
    }
}
```
---
## Changelog
### Added
- `Ray2d` and `Ray3d`
- `Ray2d::new` and `Ray3d::new` constructors
- `Plane2d::new` and `Plane3d::new` constructors
### Removed
- Removed `Ray` in favor of `Ray3d`
### Changed
- `direction` is now a `Direction2d`/`Direction3d` instead of a vector,
which provides guaranteed normalization
- `intersect_plane` now takes a `Plane2d`/`Plane3d` instead of just a
vector for the plane normal
- `Direction2d` and `Direction3d` now derive `Serialize` and
`Deserialize` to preserve ray (de)serialization
## Migration Guide
`Ray` has been renamed to `Ray3d`.
### Ray creation
Before:
```rust
Ray {
    origin: Vec3::ZERO,
    direction: Vec3::new(0.5, 0.6, 0.2).normalize(),
}
```
After:
```rust
// Option 1:
Ray3d {
    origin: Vec3::ZERO,
    direction: Direction3d::new(Vec3::new(0.5, 0.6, 0.2)).unwrap(),
}

// Option 2:
Ray3d::new(Vec3::ZERO, Vec3::new(0.5, 0.6, 0.2))
```
### Plane intersections
Before:
```rust
let result = ray.intersect_plane(Vec2::X, Vec2::Y);
```
After:
```rust
let result = ray.intersect_plane(Vec2::X, Plane2d::new(Vec2::Y));
```
# Objective
The name `TextAlignment` is really deceptive and almost every new user
gets confused about the differences between aligning text with
`TextAlignment`, aligning text with `Style` and aligning text with
anchor (when using `Text2d`).
## Solution
* Rename `TextAlignment` to `JustifyText`. The associated helper methods
are also renamed.
* Improve the doc comments for text explaining explicitly how the
`JustifyText` component affects the arrangement of text.
* Add some extra cases to the `text_debug` example that demonstrate the
differences between alignment using `JustifyText` and alignment using
`Style`.
<img width="757" alt="text_debug_2"
src="https://github.com/bevyengine/bevy/assets/27962798/9d53e647-93f9-4bc7-8a20-0d9f783304d2">
---
## Changelog
* `TextAlignment` has been renamed to `JustifyText`
* `TextBundle::with_text_alignment` has been renamed to
`TextBundle::with_text_justify`
* `Text::with_alignment` has been renamed to `Text::with_justify`
* The `text_alignment` field of `TextMeasureInfo` has been renamed to
`justification`
## Migration Guide
* `TextAlignment` has been renamed to `JustifyText`
* `TextBundle::with_text_alignment` has been renamed to
`TextBundle::with_text_justify`
* `Text::with_alignment` has been renamed to `Text::with_justify`
* The `text_alignment` field of `TextMeasureInfo` has been renamed to
`justification`
# Objective
I tried setting `ClearColorConfig` in my app via `Color::FOO.into()`
expecting it to work, but the impl was missing.
## Solution
- Add `impl From<Color> for ClearColorConfig`
- Change examples to use this impl
## Changelog
### Added
- `ClearColorConfig` can be constructed via `.into()` on a `Color`
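A short sketch in context (at the time of this PR, `clear_color` still
lived on `Camera3d`):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        camera_3d: Camera3d {
            // Previously: ClearColorConfig::Custom(Color::rgb(0.2, 0.1, 0.4))
            clear_color: Color::rgb(0.2, 0.1, 0.4).into(),
            ..default()
        },
        ..default()
    });
}
```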
---------
Signed-off-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
Co-authored-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
# Objective
- Fix adding `#![allow(clippy::type_complexity)]` everywhere, like #9796
## Solution
- Use the new [lints] table that will land in 1.74
(https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#lints)
- Inherit the lints in the workspace, crates and examples.
```toml
[lints]
workspace = true
```
## Changelog
- Bump rust version to 1.74
- Enable lints table for the workspace
```toml
[workspace.lints.clippy]
type_complexity = "allow"
```
- Allow type complexity for all crates and examples
```toml
[lints]
workspace = true
```
---------
Co-authored-by: Martín Maita <47983254+mnmaita@users.noreply.github.com>
## Objective
- Add an arrow gizmo as suggested by #9400
## Solution
(excuse my Protomen music)
https://github.com/bevyengine/bevy/assets/14184826/192adf24-079f-4a4b-a17b-091e892974ec
Wasn't horribly hard once I remembered I can change coordinate systems
whenever I want. Gave them four tips (as suggested by @alice-i-cecile in
Discord) instead of trying to decide what direction the tips should
point.
Made the tip length default to 1/10 of the arrow's length, which looked
good enough to me. Hard-coded the angle from the body to the tips to 45
degrees.
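A usage sketch (the tip length value is arbitrary):
```rust
use bevy::prelude::*;

fn draw_arrows(mut gizmos: Gizmos) {
    // 3D arrow with the tip-length override from the TODO list below.
    gizmos
        .arrow(Vec3::ZERO, Vec3::ONE, Color::YELLOW)
        .with_tip_length(0.25);
    // 2D counterpart.
    gizmos.arrow_2d(Vec2::ZERO, Vec2::X * 100.0, Color::GREEN);
}
```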
## Still TODO
- [x] actual doc comments
- [x] doctests
- [x] `ArrowBuilder.with_tip_length()`
---
## Changelog
- Added `gizmos.arrow()` and `gizmos.arrow_2d()`
- Added arrows to `2d_gizmos` and `3d_gizmos` examples
## Migration Guide
N/A
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
# Objective
- Changes the default clear color to match the code block color on
Bevy's website.
## Solution
- Changed the clear color and updated text in examples to ensure
adequate contrast. Text that inconsistently used an explicit white color
now uses the default color instead, which is already white.
- Additionally, updated the `3d_scene` example to make it look a bit
better and use Bevy's branding colors.
![image](https://github.com/bevyengine/bevy/assets/2632925/540a22c0-826c-4c33-89aa-34905e3e313a)
# Objective
<img width="1920" alt="Screenshot 2023-04-26 at 01 07 34"
src="https://user-images.githubusercontent.com/418473/234467578-0f34187b-5863-4ea1-88e9-7a6bb8ce8da3.png">
This PR adds both diffuse and specular light transmission capabilities
to the `StandardMaterial`, with support for screen space refractions.
This enables realistically representing a wide range of real-world
materials, such as:
- Glass; (Including frosted glass)
- Transparent and translucent plastics;
- Various liquids and gels;
- Gemstones;
- Marble;
- Wax;
- Paper;
- Leaves;
- Porcelain.
Unlike existing support for transparency, light transmission does not
rely on fixed function alpha blending, and therefore works with both
`AlphaMode::Opaque` and `AlphaMode::Mask` materials.
## Solution
- Introduces a number of transmission-related fields in the
`StandardMaterial`;
- For specular transmission:
- Adds logic to take a view main texture snapshot after the opaque
phase; (in order to perform screen space refractions)
- Introduces a new `Transmissive3d` phase to the renderer, to which all
meshes with `transmission > 0.0` materials are sent.
- Calculates a light exit point (of the approximate mesh volume) using
`ior` and `thickness` properties
- Samples the snapshot texture with an adaptive number of taps across a
`roughness`-controlled radius enabling “blurry” refractions
- For diffuse transmission:
- Approximates transmitted diffuse light by using a second, flipped +
displaced, diffuse-only Lambertian lobe for each light source.
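A hedged sketch of a rough-glass `StandardMaterial` built from the new
fields (the values are arbitrary):
```rust
use bevy::prelude::*;

fn make_glass(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        specular_transmission: 0.9,
        diffuse_transmission: 0.0,
        thickness: 0.2,
        ior: 1.5,
        perceptual_roughness: 0.12, // drives the "blurry refraction" radius
        attenuation_distance: 0.5,
        attenuation_color: Color::rgb(0.9, 1.0, 0.9),
        ..default()
    })
}
```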
## To Do
- [x] Figure out where `fresnel_mix()` is taking place, if at all, and
where `dielectric_specular` is being calculated, if at all, and update
them to use the `ior` value (Not a blocker, just a nice-to-have for more
correct BSDF)
- To the best of my knowledge, this is now taking place after
964340cdd. The fresnel mix is actually "split" into two parts in our
implementation: one `(1 - fresnel(...))` factor in the transmission, and
`fresnel()` in the light implementations. A surface with more
reflectance will now produce slightly dimmer transmission towards the
grazing angle, as more of the light gets reflected.
- [x] Add `transmission_texture`
- [x] Add `diffuse_transmission_texture`
- [x] Add `thickness_texture`
- [x] Add `attenuation_distance` and `attenuation_color`
- [x] Connect values to glTF loader
- [x] `transmission` and `transmission_texture`
- [x] `thickness` and `thickness_texture`
- [x] `ior`
- [ ] `diffuse_transmission` and `diffuse_transmission_texture` (needs
upstream support in `gltf` crate, not a blocker)
- [x] Add support for multiple screen space refraction “steps”
- [x] Conditionally create no transmission snapshot texture at all if
`steps == 0`
- [x] Conditionally enable/disable screen space refraction transmission
snapshots
- [x] Read from depth pre-pass to prevent refracting pixels in front of
the light exit point
- [x] Use `interleaved_gradient_noise()` function for sampling blur in a
way that benefits from TAA
- [x] Drill down a TAA `#define`, tweak some aspects of the effect
conditionally based on it
- [x] Remove const array that's crashing under HLSL (unless a new `naga`
release with https://github.com/gfx-rs/naga/pull/2496 comes out before
we merge this)
- [ ] Look into alternatives to the `switch` hack for dynamically
indexing the const array (might not be needed, compilers seem to be
decent at expanding it)
- [ ] Add pipeline keys for gating transmission (do we really want/need
this?)
- [x] Tweak some material field/function names?
## A Note on Texture Packing
_This was originally added as a comment to the
`specular_transmission_texture`, `thickness_texture` and
`diffuse_transmission_texture` documentation. I removed it since it was
more confusing than helpful, and it will likely be made redundant or
need updating once we have better infrastructure for preprocessing
assets._
Due to how channels are mapped, you can more efficiently use a single
shared texture image
for configuring the following:
- R - `specular_transmission_texture`
- G - `thickness_texture`
- B - _unused_
- A - `diffuse_transmission_texture`
The `KHR_materials_diffuse_transmission` glTF extension also defines a
`diffuseTransmissionColorTexture`,
that _we don't currently support_. One might choose to pack the
intensity and color textures together,
using RGB for the color and A for the intensity, in which case this
packing advice doesn't really apply.
---
## Changelog
- Added a new `Transmissive3d` render phase for rendering specular
transmissive materials with screen space refractions
- Added rendering support for transmitted environment map light on the
`StandardMaterial` as a fallback for screen space refractions
- Added `diffuse_transmission`, `specular_transmission`, `thickness`,
`ior`, `attenuation_distance` and `attenuation_color` to the
`StandardMaterial`
- Added `diffuse_transmission_texture`, `specular_transmission_texture`,
`thickness_texture` to the `StandardMaterial`, gated behind a new
`pbr_transmission_textures` cargo feature (off by default, for maximum
hardware compatibility)
- Added `Camera3d::screen_space_specular_transmission_steps` for
controlling the number of “layers of transparency” rendered for
transmissive objects
- Added a `TransmittedShadowReceiver` component for enabling shadows in
(diffusely) transmitted light. (disabled by default, as it requires
carefully setting up the `thickness` to avoid self-shadow artifacts)
- Added support for the `KHR_materials_transmission`,
`KHR_materials_ior` and `KHR_materials_volume` glTF extensions
- Renamed items related to temporal jitter for greater consistency
## Migration Guide
- `SsaoPipelineKey::temporal_noise` has been renamed to
`SsaoPipelineKey::temporal_jitter`
- The `TAA` shader def (controlled by the presence of the
`TemporalAntiAliasSettings` component in the camera) has been replaced
with the `TEMPORAL_JITTER` shader def (controlled by the presence of the
`TemporalJitter` component in the camera)
- `MeshPipelineKey::TAA` has been replaced by
`MeshPipelineKey::TEMPORAL_JITTER`
- The `TEMPORAL_NOISE` shader def has been consolidated with
`TEMPORAL_JITTER`
# Objective
- The `deferred_rendering` example sometimes fails to render in CI
- Make it easier to render
## Solution
- Reduce the complexity of the sphere used
# Objective
- Build on the changes in https://github.com/bevyengine/bevy/pull/9982
- Use `ImageSamplerDescriptor` as the "public image sampler descriptor"
interface in all places (for consistency)
- Make it possible to configure textures to use the "default" sampler
(as configured in the `DefaultImageSampler` resource)
- Fix a bug introduced in #9982 that prevents configured samplers from
being used in Basis, KTX2, and DDS textures
---
## Migration Guide
- When using the `Image` API, use `ImageSamplerDescriptor` instead of
`wgpu::SamplerDescriptor`.
- If writing custom wgpu renderer features that work with `Image`, call
`image_sampler.as_wgpu()` to convert to a wgpu descriptor.
# Objective
A follow-up PR for https://github.com/bevyengine/bevy/pull/10221
## Changelog
Replaced usages of `texture_descriptor.size` with the helper methods of
`Image` throughout the entire engine codebase
# Objective
Fog color was passed to shaders without conversion from sRGB to linear
color space. Because shaders expect colors in linear space, this
resulted in the wrong color being used. This is most noticeable in open
scenes with a dark fog color and the clear color set to the same color.
In that case, the background/clear color (which is properly converted)
ends up darker than very distant objects.
Example:
![image](https://github.com/bevyengine/bevy/assets/160391/89b70d97-b2d0-4bc5-80f4-c9e8b8801c4c)
[bevy-fog-color-bug.zip](https://github.com/bevyengine/bevy/files/13063718/bevy-fog-color-bug.zip)
## Solution
Add missing conversion of fog color to linear color space.
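A sketch of the shape of the fix, using `Color`'s existing conversion
helper:
```rust
use bevy::prelude::*;

// Convert the user-facing sRGB fog color to linear space before it is
// written into the GPU uniform.
fn fog_color_for_gpu(fog_color: Color) -> Vec4 {
    Vec4::from_array(fog_color.as_linear_rgba_f32())
}
```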
---
## Changelog
* Fixed conversion of fog color
## Migration Guide
- Colors in `FogSettings` struct (`color` and `directional_light_color`)
are now sent to the GPU in linear space. If you were using
`Color::rgb()`/`Color::rgba()` and would like to retain the previous
colors, you can quickly fix it by switching to
`Color::rgb_linear()`/`Color::rgba_linear()`.
# Objective
To get the width or height of an image you do:
```rust
self.texture_descriptor.size.{width, height}
```
That is quite verbose.
This PR adds some convenience methods to `Image` to reduce verbosity.
## Changelog
* Add a `width()` method for getting the width of an image.
* Add a `height()` method for getting the height of an image.
* Rename the `size()` method to `size_f32()`.
* Add a `size()` method for getting the size of an image as u32.
* Renamed the `aspect_2d()` method to `aspect_ratio()`.
## Migration Guide
Replace calls to the `Image::size()` method with `size_f32()`.
Replace calls to the `Image::aspect_2d()` method with `aspect_ratio()`.
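A quick sketch of the resulting API (the return types are my
assumption):
```rust
use bevy::prelude::*;

fn inspect(image: &Image) {
    let size: UVec2 = image.size(); // new, integer size
    let size_f: Vec2 = image.size_f32(); // formerly `size()`
    let (w, h) = (image.width(), image.height());
    let aspect = image.aspect_ratio(); // formerly `aspect_2d()`
    info!("{size} {size_f} {w}x{h} {aspect}");
}
```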
# Objective
- Make it possible to move the light position around in the
`shadow_biases` example
- Also support resetting the depth/normal biases to the engine defaults,
or zero.
## Solution
- The light position is displayed in the text overlay.
- The light position can be adjusted with
left/right/up/down/pgup/pgdown.
- The depth/normal biases can be reset to defaults by pressing R, or to
zero by pressing Z.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
# Objective
- Demonstrate the different shadow PCF methods in the `shadow_biases`
example
## Solution
- Cycle through the available methods when pressing the `F` key
- Display which filter method is being used
# Objective
- Add a [Deferred
Renderer](https://en.wikipedia.org/wiki/Deferred_shading) to Bevy.
- This allows subsequent passes to access per pixel material information
before/during shading.
- Accessing this per pixel material information is needed for some
features, like GI. It also makes other features (ex. Decals) simpler to
implement and/or improves their capability. There are multiple
approaches to accomplishing this. The deferred shading approach works
well given the limitations of WebGPU and WebGL2.
Motivation: [I'm working on a GI solution for
Bevy](https://youtu.be/eH1AkL-mwhI)
## Solution
- The deferred renderer is implemented with a prepass and a deferred
lighting pass.
- The prepass renders opaque objects into the Gbuffer attachment
(`Rgba32Uint`). The PBR shader generates a `PbrInput` in mostly the same
way as the forward implementation and then [packs it into the
Gbuffer](ec1465559f/crates/bevy_pbr/src/render/pbr.wgsl (L168)).
- The deferred lighting pass unpacks the `PbrInput` and [feeds it into
the pbr()
function](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L65)),
then outputs the shaded color data.
- There is now a resource
[DefaultOpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L599))
that can be used to set the default render method for opaque materials.
If materials return `None` from
[opaque_render_method()](ec1465559f/crates/bevy_pbr/src/material.rs (L131))
the `DefaultOpaqueRendererMethod` will be used. Otherwise, custom
materials can also explicitly choose to only support Deferred or Forward
by returning the respective
[OpaqueRendererMethod](ec1465559f/crates/bevy_pbr/src/material.rs (L603))
- Deferred materials can be used seamlessly along with both opaque and
transparent forward rendered materials in the same scene. The [deferred
rendering
example](https://github.com/DGriffin91/bevy/blob/deferred/examples/3d/deferred_rendering.rs)
does this.
- The deferred renderer does not support MSAA. If any deferred materials
are used, MSAA must be disabled. Both TAA and FXAA are supported.
- Deferred rendering supports WebGL2/WebGPU.
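A hedged sketch of opting a whole app into deferred shading via that
resource (the constructor name is assumed):
```rust
use bevy::pbr::DefaultOpaqueRendererMethod;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Opaque materials returning `None` will now render deferred.
        .insert_resource(DefaultOpaqueRendererMethod::deferred())
        // Deferred does not support MSAA, per the notes above.
        .insert_resource(Msaa::Off)
        .run();
}
```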
## Custom deferred materials
- Custom materials can support both deferred and forward at the same
time. The
[StandardMaterial](ec1465559f/crates/bevy_pbr/src/render/pbr.wgsl (L166))
does this. So does [this
example](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56).
- Custom deferred materials that require PBR lighting can create a
`PbrInput`, write it to the deferred GBuffer and let it be rendered by
the `PBRDeferredLightingPlugin`.
- Custom deferred materials that require custom lighting have two
options:
1. Use the base_color channel of the `PbrInput` combined with the
`STANDARD_MATERIAL_FLAGS_UNLIT_BIT` flag.
[Example.](https://github.com/DGriffin91/bevy_glowy_orb_tutorial/blob/deferred/assets/shaders/glowy.wgsl#L56)
(If the unlit bit is set, the base_color is stored as RGB9E5 for extra
precision)
2. A custom deferred lighting pass can be created, either overriding the
default or running in addition to it. A depth buffer is used to limit
rendering to only the required fragments for each deferred lighting
pass. Materials can set their respective depth id via the
[deferred_lighting_pass_id](b79182d2a3/crates/bevy_pbr/src/prepass/prepass_io.wgsl (L95))
attachment. The custom deferred lighting pass plugin can then set [its
corresponding
depth](ec1465559f/crates/bevy_pbr/src/deferred/deferred_lighting.wgsl (L37)).
Then with the lighting pass using
[CompareFunction::Equal](ec1465559f/crates/bevy_pbr/src/deferred/mod.rs (L335)),
only the fragments with a depth that equal the corresponding depth
written in the material will be rendered.
Custom deferred lighting plugins can also be created to render the
StandardMaterial. The default deferred lighting plugin can be bypassed
with `DefaultPlugins.set(PBRDeferredLightingPlugin { bypass: true })`
---------
Co-authored-by: nickrart <nickolas.g.russell@gmail.com>
# Objective
- Use the `Material` abstraction for the Wireframes
- Right now this doesn't have many benefits other than simplifying some
of the rendering code
- We can reuse the default vertex shader and avoid rendering
inconsistencies
- The goal is to have a material with a color on each mesh so this
approach will make it easier to implement
- Originally done in https://github.com/bevyengine/bevy/pull/5303, but I
decided to split the Material part into its own PR; adding per-entity
colors and globally configurable colors will then be a much simpler
diff.
## Solution
- Use the new `Material` abstraction for the Wireframes
## Notes
It's possible this isn't ideal since this adds a
`Handle<WireframeMaterial>` to all the meshes compared to the original
approach that didn't need anything. I didn't notice any performance
impact on my machine.
This might be a surprising usage of `Material` at first, because
intuitively you only have one material per mesh, but the way it's
implemented you can have as many different types of materials as you
want on a mesh.
## Migration Guide
`WireframePipeline` was removed. If you were using it directly, please
create an issue explaining your use case.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- This PR aims to make creating meshes a little bit more ergonomic,
specifically by removing the need for intermediate mutable variables.
## Solution
- We add methods that consume the `Mesh` and return a mesh with the
specified changes, so that meshes can be entirely constructed via
builder-style calls, without intermediate variables;
- Methods are flagged with `#[must_use]` to ensure proper use;
- Examples are updated to use the new methods where applicable. Some
examples are kept with the mutating methods so that users can still
easily discover them, and also where the new methods wouldn't really be
an improvement.
## Examples
Before:
```rust
let mut mesh = Mesh::new(PrimitiveTopology::TriangleList);
mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, vs);
mesh.insert_attribute(Mesh::ATTRIBUTE_NORMAL, vns);
mesh.insert_attribute(Mesh::ATTRIBUTE_UV_0, vts);
mesh.set_indices(Some(Indices::U32(tris)));
mesh
```
After:
```rust
Mesh::new(PrimitiveTopology::TriangleList)
    .with_inserted_attribute(Mesh::ATTRIBUTE_POSITION, vs)
    .with_inserted_attribute(Mesh::ATTRIBUTE_NORMAL, vns)
    .with_inserted_attribute(Mesh::ATTRIBUTE_UV_0, vts)
    .with_indices(Some(Indices::U32(tris)))
```
Before:
```rust
let mut cube = Mesh::from(shape::Cube { size: 1.0 });
cube.generate_tangents().unwrap();
PbrBundle {
    mesh: meshes.add(cube),
    ..default()
}
```
After:
```rust
PbrBundle {
    mesh: meshes.add(
        Mesh::from(shape::Cube { size: 1.0 })
            .with_generated_tangents()
            .unwrap(),
    ),
    ..default()
}
```
---
## Changelog
- Added consuming builder methods for more ergonomic `Mesh` creation:
`with_inserted_attribute()`, `with_removed_attribute()`,
`with_indices()`, `with_duplicated_vertices()`,
`with_computed_flat_normals()`, `with_generated_tangents()`,
`with_morph_targets()`, `with_morph_target_names()`.
# Objective
https://github.com/bevyengine/bevy/pull/7328 introduced an API to
override the global wireframe config. I believe it is flawed for a few
reasons.
This PR uses a non-breaking API. Instead of making `Wireframe` an enum,
I introduced the `NeverRenderWireframe` component. Here's why I think
this is better:
- Easier to migrate since it doesn't change the old behaviour;
essentially nothing to migrate. Right now this PR is a breaking change,
but I don't think it has to be.
- It's similar to other "per mesh" rendering features like
NotShadowCaster/NotShadowReceiver
- It doesn't force new users to also think about global vs not global if
all they want is to render a wireframe
- This would also let you filter at the query definition level instead
of filtering when running the query
## Solution
- Introduce a `NeverRenderWireframe` component that ignores the global
config
---
## Changelog
- Added a `NeverRenderWireframe` component that ignores the global
`WireframeConfig`
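A minimal sketch of the intended usage:
```rust
use bevy::pbr::wireframe::{NeverRenderWireframe, WireframeConfig};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Wireframes for everything...
    commands.insert_resource(WireframeConfig { global: true });
    // ...except this entity.
    commands.spawn((PbrBundle::default(), NeverRenderWireframe));
}
```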
# Objective
Allow the user to choose between "Add wireframes to these specific
entities" or "Add wireframes to everything _except_ these specific
entities".
Fixes #7309
## Solution
Make the `Wireframe` component act like an override to the global
configuration.
Having `global` set to `false` and adding a `Wireframe` with `enable:
true` acts exactly as before.
But now the opposite is also possible: setting `global` to `true` and
adding a `Wireframe` with `enable: false` will draw wireframes for
everything _except_ that entity.
Updated the example to show how overriding the global config works.
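A hedged sketch of the override described here (note that the `enable`
field is specific to this PR's API, which the `NeverRenderWireframe` PR
above later reworked):
```rust
use bevy::pbr::wireframe::{Wireframe, WireframeConfig};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.insert_resource(WireframeConfig { global: true });
    // Wireframes everywhere except this entity.
    commands.spawn((PbrBundle::default(), Wireframe { enable: false }));
}
```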
# Objective
- See fewer warnings when running `cargo clippy` locally.
## Solution
- allow `clippy::type_complexity` in more places, which also signals to
users they should do the same.
This is a duplicate of #9632; I created it because I forgot to make a
new branch when I first made that PR and was having trouble resolving
merge conflicts, so I had to rebuild it.
# Objective
- Allow other plugins to create the renderer resources. An example of
where this would be required is my [OpenXR
plugin](https://github.com/awtterpip/bevy_openxr)
## Solution
- Changed the bevy RenderPlugin to optionally take precreated render
resources instead of a configuration.
## Migration Guide
The `RenderPlugin` now takes a `RenderCreation` enum instead of
`WgpuSettings`. `RenderCreation::default()` returns
`RenderCreation::Automatic(WgpuSettings::default())`. `RenderCreation`
also implements `From<WgpuSettings>`.
```rust
// before
RenderPlugin {
    wgpu_settings: WgpuSettings {
        ...
    },
}

// now
RenderPlugin {
    render_creation: RenderCreation::Automatic(WgpuSettings {
        ...
    }),
}

// or
RenderPlugin {
    render_creation: WgpuSettings {
        ...
    }.into(),
}
```
---------
Co-authored-by: Malek <pocmalek@gmail.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
# Bevy Asset V2 Proposal
## Why Does Bevy Need A New Asset System?
Asset pipelines are a central part of the gamedev process. Bevy's
current asset system is missing a number of features that make it
non-viable for many classes of gamedev. After plenty of discussions and
[a long community feedback
period](https://github.com/bevyengine/bevy/discussions/3972), we've
identified a number of missing features:
* **Asset Preprocessing**: it should be possible to "preprocess" /
"compile" / "crunch" assets at "development time" rather than when the
game starts up. This enables offloading expensive work from deployed
apps, faster asset loading, less runtime memory usage, etc.
* **Per-Asset Loader Settings**: Individual assets cannot define their
own loaders that override the defaults. Additionally, they cannot
provide per-asset settings to their loaders. This is a huge limitation,
as many asset types don't provide all information necessary for Bevy
_inside_ the asset. For example, a raw PNG image says nothing about how
it should be sampled (ex: linear vs nearest).
* **Asset `.meta` files**: assets should have configuration files stored
adjacent to the asset in question, which allows the user to configure
asset-type-specific settings. These settings should be accessible during
the pre-processing phase. Modifying a `.meta` file should trigger a
re-processing / re-load of the asset. It should be possible to configure
asset loaders from the meta file.
* **Processed Asset Hot Reloading**: Changes to processed assets (or
their dependencies) should result in re-processing them and re-loading
the results in live Bevy Apps.
* **Asset Dependency Tracking**: The current bevy_asset has no good way
to wait for asset dependencies to load. It punts this as an exercise for
consumers of the loader apis, which is unreasonable and error prone.
There should be easy, ergonomic ways to wait for assets to load and
block some logic on an asset's entire dependency tree loading.
* **Runtime Asset Loading**: it should be (optionally) possible to load
arbitrary assets dynamically at runtime. This necessitates being able to
deploy and run the asset server alongside Bevy Apps on _all platforms_.
For example, we should be able to invoke the shader compiler at runtime,
stream scenes from sources like the internet, etc. To keep deployed
binaries (and startup times) small, the runtime asset server
configuration should be configurable with different settings compared to
the "pre processor asset server".
* **Multiple Backends**: It should be possible to load assets from
arbitrary sources (filesystems, the internet, remote asset serves, etc).
* **Asset Packing**: It should be possible to deploy assets in
compressed "packs", which makes it easier and more efficient to
distribute assets with Bevy Apps.
* **Asset Handoff**: It should be possible to hold a "live" asset
handle, which correlates to runtime data, without actually holding the
asset in memory. Ex: it must be possible to hold a reference to a GPU
mesh generated from a "mesh asset" without keeping the mesh data in CPU
memory
* **Per-Platform Processed Assets**: Different platforms and app
distributions have different capabilities and requirements. Some
platforms need lower asset resolutions or different asset formats to
operate within the hardware constraints of the platform. It should be
possible to define per-platform asset processing profiles. And it should
be possible to deploy only the assets required for a given platform.
These features have architectural implications that are significant
enough to require a full rewrite. The current Bevy Asset implementation
got us this far, but it can take us no farther. This PR defines a brand
new asset system that implements most of these features, while laying
the foundations for the remaining features to be built.
## Bevy Asset V2
Here is a quick overview of the features introduced in this PR.
* **Asset Preprocessing**: Preprocess assets at development time into
more efficient (and configurable) representations
* **Dependency Aware**: Dependencies required to process an asset are
tracked. If an asset's processed dependency changes, it will be
reprocessed
* **Hot Reprocessing/Reloading**: detect changes to asset source files,
reprocess them if they have changed, and then hot-reload them in Bevy
Apps.
* **Only Process Changes**: Assets are only re-processed when their
source file (or meta file) has changed. This uses hashing and timestamps
to avoid processing assets that haven't changed.
* **Transactional and Reliable**: Uses write-ahead logging (a technique
commonly used by databases) to recover from crashes / forced-exits.
Whenever possible it avoids full-reprocessing / only uncompleted
transactions will be reprocessed. When the processor is running in
parallel with a Bevy App, processor asset writes block Bevy App asset
reads. Reading metadata + asset bytes is guaranteed to be transactional
/ correctly paired.
* **Portable / Run anywhere / Database-free**: The processor does not
rely on an in-memory database (although it uses some database techniques
for reliability). This is important because pretty much all in-memory
databases have unsupported platforms or build complications.
* **Configure Processor Defaults Per File Type**: You can say "use this
processor for all files of this type".
* **Custom Processors**: The `Processor` trait is flexible and
unopinionated. It can be implemented by downstream plugins.
* **LoadAndSave Processors**: Most asset processing scenarios can be
expressed as "run AssetLoader A, save the results using AssetSaver X,
and then load the result using AssetLoader B". For example, load this
png image using `PngImageLoader`, which produces an `Image` asset and
then save it using `CompressedImageSaver` (which also produces an
`Image` asset, but in a compressed format), which takes an `Image` asset
as input. This means if you have an `AssetLoader` for an asset, you are
already half way there! It also means that you can share AssetSavers
across multiple loaders. Because `CompressedImageSaver` accepts Bevy's
generic Image asset as input, it means you can also use it with some
future `JpegImageLoader`.
* **Loader and Saver Settings**: Asset Loaders and Savers can now define
their own settings types, which are passed in as input when an asset is
loaded / saved. Each asset can define its own settings.
* **Asset `.meta` files**: configure asset loaders, their settings,
enable/disable processing, and configure processor settings
* **Runtime Asset Dependency Tracking**: Runtime asset dependencies (ex:
if an asset contains a `Handle<Image>`) are tracked by the asset server.
An event is emitted when an asset and all of its dependencies have been
loaded
* **Unprocessed Asset Loading**: Assets do not require preprocessing.
They can be loaded directly. A processed asset is just a "normal" asset
with some extra metadata. Asset Loaders don't need to know or care about
whether or not an asset was processed.
* **Async Asset IO**: Asset readers/writers use async non-blocking
interfaces. Note that because Rust doesn't yet support async traits,
there is a bit of manual Boxing / Future boilerplate. This will
hopefully be removed in the near future when Rust gets async traits.
* **Pluggable Asset Readers and Writers**: Arbitrary asset source
readers/writers are supported, both by the processor and the asset
server.
* **Better Asset Handles**
* **Single Arc Tree**: Asset Handles now use a single arc tree that
represents the lifetime of the asset. This makes their implementation
simpler, more efficient, and allows us to cheaply attach metadata to
handles. Ex: the AssetPath of a handle is now directly accessible on the
handle itself!
* **Const Typed Handles**: typed handles can be constructed in a const
context. No more weird "const untyped converted to typed at runtime"
patterns!
* **Handles and Ids are Smaller / Faster To Hash / Compare**: Typed
`Handle<T>` is now much smaller in memory and `AssetId<T>` is even
smaller.
* **Weak Handle Usage Reduction**: In general Handles are now considered
to be "strong". Bevy features that previously used "weak `Handle<T>`"
have been ported to `AssetId<T>`, which makes it statically clear that
the features do not hold strong handles (while retaining strong type
information). Currently Handle::Weak still exists, but it is very
possible that we can remove that entirely.
* **Efficient / Dense Asset Ids**: Assets now have efficient dense
runtime asset ids, which means we can avoid expensive hash lookups.
Assets are stored in Vecs instead of HashMaps. There are now typed and
untyped ids, which means we no longer need to store dynamic type
information in the ID for typed handles. "AssetPathId" (which was a
nightmare from a performance and correctness standpoint) has been
entirely removed in favor of dense ids (which are retrieved for a path
on load)
* **Direct Asset Loading, with Dependency Tracking**: Assets that are
defined at runtime can still have their dependencies tracked by the
Asset Server (ex: if you create a material at runtime, you can still
wait for its textures to load). This is accomplished via the (currently
optional) "asset dependency visitor" trait. This system can also be used
to define a set of assets to load, then wait for those assets to load.
* **Async folder loading**: Folder loading also uses this system and
immediately returns a handle to the LoadedFolder asset, which means
folder loading no longer blocks on directory traversals.
* **Improved Loader Interface**: Loaders now have a specific "top level
asset type", which makes returning the top-level asset simpler and
statically typed.
* **Basic Image Settings and Processing**: Image assets can now be
processed into the gpu-friendly Basic Universal format. The ImageLoader
now has a setting to define what format the image should be loaded as.
Note that this is just a minimal MVP ... plenty of additional work to do
here. To demo this, enable the `basis-universal` feature and turn on
asset processing.
* **Simpler Audio Play / AudioSink API**: Asset handle providers are
cloneable, which means the Audio resource can mint its own handles. This
means you can now do `let sink_handle = audio.play(music)` instead of
`let sink_handle = audio_sinks.get_handle(audio.play(music))`. Note that
this might still be replaced by
https://github.com/bevyengine/bevy/pull/8424.
* **Removed Handle Casting From Engine Features**: Ex: FontAtlases no
longer use casting between handle types
## Using The New Asset System
### Normal Unprocessed Asset Loading
By default the `AssetPlugin` does not use processing. It behaves pretty
much the same way as the old system.
If you are defining a custom asset, first derive `Asset`:
```rust
#[derive(Asset)]
struct Thing {
    value: String,
}
```
Initialize the asset:
```rust
app.init_asset::<Thing>()
```
Implement a new `AssetLoader` for it:
```rust
#[derive(Default)]
struct ThingLoader;

#[derive(Serialize, Deserialize, Default)]
pub struct ThingSettings {
    some_setting: bool,
}

impl AssetLoader for ThingLoader {
    type Asset = Thing;
    type Settings = ThingSettings;

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        settings: &'a ThingSettings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Thing, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;
            // convert bytes to value somehow
            Ok(Thing { value })
        })
    }

    fn extensions(&self) -> &[&str] {
        &["thing"]
    }
}
```
Note that this interface will get much cleaner once Rust gets support
for async traits. `Reader` is an async futures_io::AsyncRead. You can
stream bytes as they come in or read them all into a `Vec<u8>`,
depending on the context. You can use `let handle =
load_context.load(path)` to kick off a dependency load, retrieve a
handle, and register the dependency for the asset.
Then just register the loader in your Bevy app:
```rust
app.init_asset_loader::<ThingLoader>()
```
Now just add your `Thing` asset files into the `assets` folder and load
them like this:
```rust
fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load("cool.thing");
}
```
You can check load states directly via the asset server:
```rust
if asset_server.load_state(&handle) == LoadState::Loaded { }
```
You can also listen for events:
```rust
fn system(mut events: EventReader<AssetEvent<Thing>>, handle: Res<SomeThingHandle>) {
    for event in events.iter() {
        if event.is_loaded_with_dependencies(&handle) {
        }
    }
}
```
Note the new `AssetEvent::LoadedWithDependencies`, which only fires when
the asset is loaded _and_ all dependencies (and their dependencies) have
loaded.
Unlike the old asset system, for a given asset path all `Handle<T>`
values point to the same underlying Arc. This means Handles can cheaply
hold more asset information, such as the AssetPath:
```rust
// prints the AssetPath of the handle
info!("{:?}", handle.path())
```
### Processed Assets
Asset processing can be enabled via the `AssetPlugin`. When developing
Bevy Apps with processed assets, do this:
```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))
```
This runs the `AssetProcessor` in the background with hot-reloading. It
reads assets from the `assets` folder, processes them, and writes them
to the `.imported_assets` folder. Asset loads in the Bevy App will wait
for a processed version of the asset to become available. If an asset in
the `assets` folder changes, it will be reprocessed and hot-reloaded in
the Bevy App.
When deploying processed Bevy apps, do this:
```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed()))
```
This does not run the `AssetProcessor` in the background. It behaves
like `AssetPlugin::unprocessed()`, but reads assets from
`.imported_assets`.
When the `AssetProcessor` is running, it will populate sibling `.meta`
files for assets in the `assets` folder. Meta files for assets that do
not have a processor configured look like this:
```rust
(
    meta_format_version: "1.0",
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```
This is metadata for an image asset. For example, if you have
`assets/my_sprite.png`, this could be the metadata stored at
`assets/my_sprite.png.meta`. Meta files are totally optional. If no
metadata exists, the default settings will be used.
In short, this file says "load this asset with the ImageLoader and use
the file extension to determine the image type". This type of meta file
is supported in all AssetPlugin modes. If in `Unprocessed` mode, the
asset (with the meta settings) will be loaded directly. If in
`ProcessedDev` mode, the asset file will be copied directly to the
`.imported_assets` folder. The meta will also be copied directly to the
`.imported_assets` folder, but with one addition:
```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 12415480888597742505,
        full_hash: 14344495437905856884,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```
`processed_info` contains `hash` (a direct hash of the asset and meta
bytes), `full_hash` (a hash of `hash` and the hashes of all
`process_dependencies`), and `process_dependencies` (the `path` and
`full_hash` of every process_dependency). A "process dependency" is an
asset dependency that is _directly_ used when processing the asset.
Images do not have process dependencies, so this is empty.
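Schematically, the relationship between these hashes looks something like the following sketch (illustrative only; the actual hashing code in the `AssetProcessor` differs):
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: `full_hash` combines the asset's own `hash` with
/// the `full_hash` of each of its process dependencies.
fn full_hash(hash: u64, dependency_full_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    hash.hash(&mut hasher);
    for dep_full_hash in dependency_full_hashes {
        dep_full_hash.hash(&mut hasher);
    }
    hasher.finish()
}
```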
When the processor is enabled, you can use the `Process` metadata
config:
```rust
(
    meta_format_version: "1.0",
    asset: Process(
        processor: "bevy_asset::processor::process::LoadAndSave<bevy_render::texture::image_loader::ImageLoader, bevy_render::texture::compressed_image_saver::CompressedImageSaver>",
        settings: (
            loader_settings: (
                format: FromExtension,
            ),
            saver_settings: (
                generate_mipmaps: true,
            ),
        ),
    ),
)
```
This configures the asset to use the `LoadAndSave` processor, which runs
an AssetLoader and feeds the result into an AssetSaver (which saves the
given Asset and defines a loader to load it with). For terseness,
`LoadAndSave` will likely get a shorter/friendlier type name when
[Stable Type Paths](#7184) lands. `LoadAndSave` is likely to be the most
common processor type, but arbitrary processors are supported.
`CompressedImageSaver` saves an `Image` in the Basis Universal format
and configures the ImageLoader to load it as basis universal. The
`AssetProcessor` will read this meta, run it through the LoadAndSave
processor, and write the basis-universal version of the image to
`.imported_assets`. The final metadata will look like this:
```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 905599590923828066,
        full_hash: 9948823010183819117,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: Format(Basis),
        ),
    ),
)
```
To try basis-universal processing out in Bevy examples (for example
`sprite.rs`), change `add_plugins(DefaultPlugins)` to
`add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))` and run
with the `basis-universal` feature enabled: `cargo run
--features=basis-universal --example sprite`.
To create a custom processor, there are two main paths:
1. Use the `LoadAndSave` processor with an existing `AssetLoader`:
implement the `AssetSaver` trait, then register the processor using
`asset_processor.register_processor::<LoadAndSave<ImageLoader,
CompressedImageSaver>>(image_saver.into())` (a sketch follows this
list).
2. Implement the `Process` trait directly and register it using:
`asset_processor.register_processor(thing_processor)`.
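For the first path, an `AssetSaver` implementation might look roughly like this sketch. The trait signature here is an assumption based on the `LoadAndSave` / `CompressedImageSaver` description above and may not match the final API exactly; `ThingSaver` is hypothetical and reuses the `Thing` types from earlier:
```rust
#[derive(Default)]
struct ThingSaver;

impl AssetSaver for ThingSaver {
    type Asset = Thing;
    type Settings = ();
    // The loader that will be used to load the saved output.
    type OutputLoader = ThingLoader;

    fn save<'a>(
        &'a self,
        writer: &'a mut Writer,
        asset: SavedAsset<'a, Thing>,
        _settings: &'a (),
    ) -> BoxedFuture<'a, Result<ThingSettings, anyhow::Error>> {
        Box::pin(async move {
            // Write the processed bytes to the destination.
            writer.write_all(asset.value.as_bytes()).await?;
            // Return the settings the OutputLoader should use to load
            // this processed output.
            Ok(ThingSettings::default())
        })
    }
}
```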
You can configure default processors for file extensions like this:
```rust
asset_processor.set_default_processor::<ThingProcessor>("thing")
```
There is one more metadata type to be aware of:
```rust
(
    meta_format_version: "1.0",
    asset: Ignore,
)
```
This will ignore the asset during processing / prevent it from being
written to `.imported_assets`.
The AssetProcessor stores a transaction log at `.imported_assets/log`
and uses it to gracefully recover from unexpected stops. This means you
can force-quit the processor (and Bevy Apps running the processor in
parallel) at arbitrary times!
`.imported_assets` is "local state". It should _not_ be checked into
source control. It should also be considered "read only". In practice,
you _can_ modify processed assets and processed metadata if you really
need to test something. But those modifications will not be represented
in the hashes of the assets, so the processed state will be "out of
sync" with the source assets. The processor _will not_ fix this for you.
Either revert the change after you have tested it, or delete the
processed files so they can be re-populated.
## Open Questions
There are a number of open questions to be discussed. We should decide
if they need to be addressed in this PR and if so, how we will address
them:
### Implied Dependencies vs Dependency Enumeration
There are currently two ways to populate asset dependencies:
* **Implied via AssetLoaders**: if an AssetLoader loads an asset (and
retrieves a handle), a dependency is added to the list.
* **Explicit via the optional Asset::visit_dependencies**: if
`server.load_asset(my_asset)` is called, it will call
`my_asset.visit_dependencies`, which will grab dependencies that have
been manually defined for the asset via the Asset trait impl (which can
be derived).
This means that defining explicit dependencies is optional for "loaded
assets". And the list of dependencies is always accurate because loaders
can only produce Handles if they register dependencies. If an asset was
loaded with an AssetLoader, it only uses the implied dependencies. If an
asset was created at runtime and added with
`asset_server.load_asset(MyAsset)`, it will use
`Asset::visit_dependencies`.
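For example, explicit dependencies can be declared when deriving `Asset` (a minimal sketch; the type and field names are illustrative):
```rust
#[derive(Asset, Clone)]
struct MyRuntimeMaterial {
    // Fields marked with `#[dependency]` are visited by the derived
    // `visit_dependencies` implementation.
    #[dependency]
    color_texture: Handle<Image>,
}

fn setup(asset_server: Res<AssetServer>) {
    let material = MyRuntimeMaterial {
        color_texture: asset_server.load("color.png"),
    };
    // `load_asset` calls `visit_dependencies`, registering `color_texture`
    // as a dependency of the new asset.
    let _handle = asset_server.load_asset(material);
}
```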
However, this can create a behavior mismatch between loaded assets and
equivalent "created at runtime" assets if `Asset::visit_dependencies`
doesn't exactly match the dependencies produced by the AssetLoader. This
behavior mismatch can be resolved by completely removing "implied loader
dependencies" and requiring `Asset::visit_dependencies` to supply
dependency data. But this creates two problems:
* It makes defining loaded assets harder and more error prone: Devs must
remember to manually annotate asset dependencies with `#[dependency]`
when deriving `Asset`. For more complicated assets (such as scenes), the
derive likely wouldn't be sufficient and a manual `visit_dependencies`
impl would be required.
* Removes the ability to immediately kick off dependency loads: When
AssetLoaders retrieve a Handle, they also immediately kick off an asset
load for the handle, which means it can start loading in parallel
_before_ the asset finishes loading. For large assets, this could be
significant. (although this could be mitigated for processed assets if
we store dependencies in the processed meta file and load them ahead of
time)
### Eager ProcessorDev Asset Loading
I made a controversial call in the interest of fast startup times ("time
to first pixel") for the "processor dev mode configuration". When
initializing the AssetProcessor, current processed versions of unchanged
assets are yielded immediately, even if their dependencies haven't been
checked yet for reprocessing. This means that
non-current-state-of-filesystem-but-previously-valid assets might be
returned to the App first, then hot-reloaded if/when their dependencies
change and the asset is reprocessed.
Is this behavior desirable? There is largely one alternative: do not
yield an asset from the processor to the app until all of its
dependencies have been checked for changes. In some common cases (load
dependency has not changed since last run) this will increase startup
time. The main question is "by how much?", and is that slower startup
time worth it in the interest of only yielding assets that are true to
the current state of the filesystem? Should this be configurable? I'm
starting to think we should only yield an asset after its (historical)
dependencies have been checked for changes + processed as necessary, but
I'm curious what you all think.
### Paths Are Currently The Only Canonical ID / Do We Want Asset UUIDs?
In this implementation AssetPaths are the only canonical asset
identifier (just like the previous Bevy Asset system and Godot). Moving
assets will result in re-scans (and currently reprocessing, although
reprocessing can easily be avoided with some changes). Asset
renames/moves will break code and assets that rely on specific paths,
unless those paths are fixed up.
Do we want / need "stable asset uuids"? Introducing them is very
possible:
1. Generate a UUID and include it in .meta files
2. Support UUID in AssetPath
3. Generate "asset indices" which are loaded on startup and map UUIDs to
paths.
4. (Maybe) Consider only supporting UUIDs for processed assets so we can
generate quick-to-load indices instead of scanning meta files.
The main "pro" is that assets referencing UUIDs don't need to be
migrated when a path changes. The main "con" is that UUIDs cannot be
"lazily resolved" like paths. They need a full view of all assets to
answer the question "does this UUID exist". Which means UUIDs require
the AssetProcessor to fully finish startup scans before saying an asset
doesn't exist. And they essentially require asset pre-processing to use
in apps, because scanning all asset metadata files at runtime to resolve
a UUID is not viable for medium-to-large apps. It really requires a
pre-generated UUID index, which must be loaded before querying for
assets.
I personally think this should be investigated in a separate PR. Paths
aren't going anywhere ... _everyone_ uses filesystems (and
filesystem-like apis) to manage their asset source files. I consider
them permanent canonical asset information. Additionally, they behave
well for both processed and unprocessed asset modes. Given that Bevy is
supporting both, this feels like the right canonical ID to start with.
UUIDS (and maybe even other indexed-identifier types) can be added later
as necessary.
### Folder / File Naming Conventions
All asset processing config currently lives in the `.imported_assets`
folder. The processor transaction log is in `.imported_assets/log`.
Processed assets are added to `.imported_assets/Default`, which will
make migrating to processed asset profiles (ex: a
`.imported_assets/Mobile` profile) a non-breaking change. It also allows
us to create top-level files like `.imported_assets/log` without it
being interpreted as an asset. Meta files currently have a `.meta`
suffix. Do we like these names and conventions?
### Should the `AssetPlugin::processed_dev` configuration enable
`watch_for_changes` automatically?
Currently it does (which I think makes sense), but it does make it the
only configuration that enables watch_for_changes by default.
### Discuss on_loaded High Level Interface:
This PR includes a very rough "proof of concept" `on_loaded` system
adapter that uses the `LoadedWithDependencies` event in combination with
`asset_server.load_asset` dependency tracking to support this pattern:
```rust
fn main() {
    App::new()
        .init_asset::<MyAssets>()
        .add_systems(Update, on_loaded(spawn_cat))
        .run();
}

#[derive(Asset, Clone)]
struct MyAssets {
    #[dependency]
    picture_of_my_cat: Handle<Image>,
    #[dependency]
    picture_of_my_other_cat: Handle<Image>,
}

impl FromWorld for MyAssets {
    fn from_world(world: &mut World) -> Self {
        let server = world.resource::<AssetServer>();
        Self {
            picture_of_my_cat: server.load("meow.png"),
            picture_of_my_other_cat: server.load("meeeeeeeow.png"),
        }
    }
}

fn spawn_cat(In(my_assets): In<MyAssets>, mut commands: Commands) {
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_cat.clone(),
        ..default()
    });
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_other_cat.clone(),
        ..default()
    });
}
```
The implementation is _very_ rough. And it is currently unsafe because
`bevy_ecs` doesn't expose some internals to do this safely from inside
`bevy_asset`. There are plenty of unanswered questions like:
* "do we add a Loadable" derive? (effectively automate the FromWorld
implementation above)
* Should `MyAssets` even be an Asset? (largely implemented this way
because it elegantly builds on `server.load_asset(MyAsset { .. })`
dependency tracking).
We should think hard about what our ideal API looks like (and if this is
a pattern we want to support). Not necessarily something we need to
solve in this PR. The current `on_loaded` impl should probably be
removed from this PR before merging.
## Clarifying Questions
### What about Assets as Entities?
This Bevy Asset V2 proposal implementation initially stored Assets as
ECS Entities. Instead of `AssetId<T>` + the `Assets<T>` resource it used
`Entity` as the asset id and Asset values were just ECS components.
There are plenty of compelling reasons to do this:
1. Easier to inline assets in Bevy Scenes (as they are "just" normal
entities + components)
2. More flexible queries: use the power of the ECS to filter assets (ex:
`Query<Mesh, With<Tree>>`).
3. Extensible. Users can add arbitrary component data to assets.
4. Things like "component visualization tools" work out of the box to
visualize asset data.
However Assets as Entities has a ton of caveats right now:
* We need to be able to allocate entity ids without a direct World
reference (aka rework the id allocator in Entities ... I worked around
this in my prototypes by just pre-allocating big chunks of entities)
* We want asset change events in addition to ECS change tracking ... how
do we populate them when mutations can come from anywhere? Do we use
Changed queries? This would require iterating over the change data for
all assets every frame. Is this acceptable or should we implement a new
"event based" component change detection option?
* Reconciling manually created assets with asset-system managed assets
has some nuance (ex: are they "loaded" / do they also have that
component metadata?)
* "how do we handle "static" / default entity handles" (ties in to the
Entity Indices discussion:
https://github.com/bevyengine/bevy/discussions/8319). This is necessary
for things like "built in" assets and default handles in things like
SpriteBundle.
* Storing asset information as a component makes it easy to "invalidate"
asset state by removing the component (or forcing modifications).
Ideally we have ways to lock this down (some combination of Rust type
privacy and ECS validation)
In practice, how we store and identify assets is a reasonably
superficial change (porting off of Assets as Entities and implementing
dedicated storage + ids took less than a day). So once we sort out the
remaining challenges the flip should be straightforward. Additionally, I
do still have "Assets as Entities" in my commit history, so we can reuse
that work. I personally think "assets as entities" is a good endgame,
but it also doesn't provide _significant_ value at the moment and it
certainly isn't ready yet with the current state of things.
### Why not Distill?
[Distill](https://github.com/amethyst/distill) is a high quality fully
featured asset system built in Rust. It is very natural to ask "why not
just use Distill?".
It is also worth calling out that for a while, [we planned on adopting
Distill / I signed off on
it](https://github.com/bevyengine/bevy/issues/708).
However I think Bevy has a number of constraints that make Distill
adoption suboptimal:
* **Architectural Simplicity:**
* Distill's processor requires an in-memory database (lmdb) and RPC
networked API (using Cap'n Proto). Each of these introduces API
complexity that increases maintenance burden and "code grokability".
Ignoring tests, documentation, and examples, Distill has 24,237 lines of
Rust code (including generated code for RPC + database interactions). If
you ignore generated code, it has 11,499 lines.
* Bevy builds the AssetProcessor and AssetServer using pluggable
AssetReader/AssetWriter Rust traits with simple io interfaces. They do
not necessitate databases or RPC interfaces (although Readers/Writers
could use them if that is desired). Bevy Asset V2 (at the time of
writing this PR) is 5,384 lines of Rust code (ignoring tests,
documentation, and examples). Grain of salt: Distill does have more
features currently (ex: Asset Packing, GUIDS, remote-out-of-process
asset processor). I do plan to implement these features in Bevy Asset V2
and I personally highly doubt they will meaningfully close the 6115
lines-of-code gap.
* This complexity gap (which while illustrated by lines of code, is much
bigger than just that) is noteworthy to me. Bevy should be hackable and
there are pillars of Distill that are very hard to understand and
extend. This is a matter of opinion (and Bevy Asset V2 also has
complicated areas), but I think Bevy Asset V2 is much more approachable
for the average developer.
* Necessary disclaimer: counting lines of code is an extremely rough
complexity metric. Read the code and form your own opinions.
* **Optional Asset Processing:** Not all Bevy Apps (or Bevy App
developers) need / want asset preprocessing. Processing increases the
complexity of the development environment by introducing things like
meta files, imported asset storage, running processors in the
background, waiting for processing to finish, etc. Distill _requires_
preprocessing to work. With Bevy Asset V2 processing is fully opt-in.
The AssetServer isn't directly aware of asset processors at all.
AssetLoaders only care about converting bytes to runtime Assets ... they
don't know or care if the bytes were pre-processed or not. Processing is
"elegantly" (forgive my self-congratulatory phrasing) layered on top and
builds on the existing Asset system primitives.
* **Direct Filesystem Access to Processed Asset State:** Distill stores
processed assets in a database. This makes debugging / inspecting the
processed outputs harder (either requires special tooling to query the
database or they need to be "deployed" to be inspected). Bevy Asset V2,
on the other hand, stores processed assets in the filesystem (by default
... this is configurable). This makes interacting with the processed
state more natural. Note that both Godot and Unity's new asset system
store processed assets in the filesystem.
* **Portability**: Because Distill's processor uses lmdb and RPC
networking, it cannot be run on certain platforms (ex: lmdb is a
non-rust dependency that cannot run on the web, some platforms don't
support running network servers). Bevy should be able to process assets
everywhere (ex: run the Bevy Editor on the web, compile + process
shaders on mobile, etc). Distill does partially mitigate this problem by
supporting "streaming" assets via the RPC protocol, but this is not a
full solve from my perspective. And Bevy Asset V2 can (in theory) also
stream assets (without requiring RPC, although this isn't implemented
yet)
Note that I _do_ still think Distill would be a solid asset system for
Bevy. But I think the approach in this PR is a better solve for Bevy's
specific "asset system requirements".
### Doesn't async-fs just shim requests to "sync" `std::fs`? What is the
point?
"True async file io" has limited / spotty platform support. async-fs
(and the rust async ecosystem generally ... ex Tokio) currently use
async wrappers over std::fs that offload blocking requests to separate
threads. This may feel unsatisfying, but it _does_ still provide value
because it prevents our task pools from blocking on file system
operations (which would prevent progress when there are many tasks to
do, but all threads in a pool are currently blocking on file system
ops).
Additionally, using async APIs for our AssetReaders and AssetWriters
also provides value because we can later add support for "true async
file io" for platforms that support it. _And_ we can implement other
"true async io" asset backends (such as networked asset io).
## Draft TODO
- [x] Fill in missing filesystem event APIs: file removed event (which
is expressed as dangling RenameFrom events in some cases), file/folder
renamed event
- [x] Assets without loaders are not moved to the processed folder. This
breaks things like referenced `.bin` files for GLTFs. This should be
configurable per-non-asset-type.
- [x] Initial implementation of Reflect and FromReflect for Handle. The
"deserialization" parity bar is low here as this only worked with static
UUIDs in the old impl ... this is a non-trivial problem. Either we add a
Handle::AssetPath variant that gets "upgraded" to a strong handle on
scene load or we use a separate AssetRef type for Bevy scenes (which is
converted to a runtime Handle on load). This deserves its own discussion
in a different pr.
- [x] Populate read_asset_bytes hash when run by the processor (a bit of
a special case .. when run by the processor the processed meta will
contain the hash so we don't need to compute it on the spot, but we
don't want/need to read the meta when run by the main AssetServer)
- [x] Delay hot reloading: currently filesystem events are handled
immediately, which creates timing issues in some cases. For example hot
reloading images can sometimes break because the image isn't finished
writing. We should add a delay, likely similar to the [implementation in
this PR](https://github.com/bevyengine/bevy/pull/8503).
- [x] Port old platform-specific AssetIo implementations to the new
AssetReader interface (currently missing Android and web)
- [x] Resolve on_loaded unsafety (either by removing the API entirely or
removing the unsafe)
- [x] Runtime loader setting overrides
- [x] Remove remaining unwraps that should be error-handled. There are a
number of TODOs here
- [x] Pretty AssetPath Display impl
- [x] Document more APIs
- [x] Resolve spurious "reloading because it has changed" events (to
repro run load_gltf with `processed_dev()`)
- [x] load_dependency hot reloading currently only works for processed
assets. If processing is disabled, load_dependency changes are not hot
reloaded.
- [x] Replace AssetInfo dependency load/fail counters with
`loading_dependencies: HashSet<UntypedAssetId>` to prevent reloads from
(potentially) breaking counters. Storing this will also enable
"dependency reloaded" events (see [Next Steps](#next-steps))
- [x] Re-add filesystem watcher cargo feature gate (currently it is not
optional)
- [ ] Migration Guide
- [ ] Changelog
## Followup TODO
- [ ] Replace "eager unchanged processed asset loading" behavior with
"don't return unchanged processed assets until dependencies have been
checked".
- [ ] Add true `Ignore` AssetAction that does not copy the asset to the
imported_assets folder.
- [ ] Finish "live asset unloading" (ex: free up CPU asset memory after
uploading an image to the GPU), rethink RenderAssets, and port renderer
features. The `Assets` collection uses `Option<T>` for asset storage to
support its removal. (1) the Option might not actually be necessary ...
might be able to just remove from the collection entirely (2) need to
finalize removal apis
- [ ] Try replacing the "channel based" asset id recycling with
something a bit more efficient (ex: we might be able to use raw atomic
ints with some cleverness)
- [ ] Consider adding UUIDs to processed assets (scoped just to helping
identify moved assets ... not exposed to load queries ... see [Next
Steps](#next-steps))
- [ ] Store "last modified" source asset and meta timestamps in
processed meta files to enable skipping expensive hashing when the file
wasn't changed
- [ ] Fix "slow loop" handle drop fix
- [ ] Migrate to TypeName
- [x] Handle "loader preregistration". See #9429
## Next Steps
* **Configurable per-type defaults for AssetMeta**: It should be
possible to add configuration like "all png image meta should default to
using nearest sampling" (currently this is hard-coded in
per-loader/processor Settings::default() impls). Also see the "Folder
Meta" bullet point.
* **Avoid Reprocessing on Asset Renames / Moves**: See the "canonical
asset ids" discussion in [Open Questions](#open-questions) and the
relevant bullet point in [Draft TODO](#draft-todo). Even without
canonical ids, folder renames could avoid reprocessing in some cases.
* **Multiple Asset Sources**: Expand AssetPath to support "asset source
names" and support multiple AssetReaders in the asset server (ex:
`webserver://some_path/image.png` backed by an Http webserver
AssetReader). The "default" asset reader would use normal
`some_path/image.png` paths. Ideally this works in combination with
multiple AssetWatchers for hot-reloading
* **Stable Type Names**: this pr removes the TypeUuid requirement from
assets in favor of `std::any::type_name`. This makes defining assets
easier (no need to generate a new uuid / use weird proc macro syntax).
It also makes reading meta files easier (because things have "friendly
names"). We also use type names for components in scene files. If they
are good enough for components, they are good enough for assets. And
consistency across Bevy pillars is desirable. However,
`std::any::type_name` is not guaranteed to be stable (although in
practice it is). We've developed a [stable type
path](https://github.com/bevyengine/bevy/pull/7184) to resolve this,
which should be adopted when it is ready.
* **Command Line Interface**: It should be possible to run the asset
processor in a separate process from the command line. This will also
require building a network-server-backed AssetReader to communicate
between the app and the processor. We've been planning to build a "bevy
cli" for awhile. This seems like a good excuse to build it.
* **Asset Packing**: This is largely an additive feature, so it made
sense to me to punt this until we've laid the foundations in this PR.
* **Per-Platform Processed Assets**: It should be possible to generate
assets for multiple platforms by supporting multiple "processor
profiles" per asset (ex: compress with format X on PC and Y on iOS). I
think there should probably be arbitrary "profiles" (which can be
separate from actual platforms), which are then assigned to a given
platform when generating the final asset distribution for that platform.
Ex: maybe devs want a "Mobile" profile that is shared between iOS and
Android. Or a "LowEnd" profile shared between web and mobile.
* **Versioning and Migrations**: Assets, Loaders, Savers, and Processors
need to have versions to determine if their schema is valid. If an asset
/ loader version is incompatible with the current version expected at
runtime, the processor should be able to migrate them. I think we should
try using Bevy Reflect for this, as it would allow us to load the old
version as a dynamic Reflect type without actually having the old Rust
type. It would also allow us to define "patches" to migrate between
versions (Bevy Reflect devs are currently working on patching). The
`.meta` file already has its own format version. Migrating that to new
versions should also be possible.
* **Real Copy-on-write AssetPaths**: Rust's actual Cow (clone-on-write
type) currently used by AssetPath can still result in String clones that
aren't actually necessary (cloning an Owned Cow clones the contents).
Bevy's asset system requires cloning AssetPaths in a number of places,
which result in actual clones of the internal Strings. This is not
efficient. AssetPath internals should be reworked to exhibit truer
cow-like-behavior that reduces String clones to the absolute minimum.
* **Consider processor-less processing**: In theory the AssetServer
could run processors "inline" even if the background AssetProcessor is
disabled. If we decide this is actually desirable, we could add this.
But I don't think it's a priority in the short or medium term.
* **Pre-emptive dependency loading**: We could encode dependencies in
processed meta files, which could then be used by the Asset Server to
kick off dependency loads as early as possible (prior to starting the
actual asset load). Is this desirable? How much time would this save in
practice?
* **Optimize Processor With UntypedAssetIds**: The processor exclusively
uses AssetPath to identify assets currently. It might be possible to
swap these out for UntypedAssetIds in some places, which are smaller /
cheaper to hash and compare.
* **One to Many Asset Processing**: An asset source file that produces
many assets currently must be processed into a single "processed" asset
source. If labeled assets can be written separately they can each have
their own configured savers _and_ they could be loaded more granularly.
Definitely worth exploring!
* **Automatically Track "Runtime-only" Asset Dependencies**: Right now,
tracking "created at runtime" asset dependencies requires adding them
via `asset_server.load_asset(StandardMaterial::default())`. I think with
some cleverness we could also do this for
`materials.add(StandardMaterial::default())`, making tracking work
"everywhere". There are challenges here relating to change detection /
ensuring the server is made aware of dependency changes. This could be
expensive in some cases.
* **"Dependency Changed" events**: Some assets have runtime artifacts
that need to be re-generated when one of their dependencies change (ex:
regenerate a material's bind group when a Texture needs to change). We
are generating the dependency graph so we can definitely produce these
events. Buuuuut generating these events will have a cost / they could be
high frequency for some assets, so we might want this to be opt-in for
specific cases.
* **Investigate Storing More Information In Handles**: Handles can now
store arbitrary information, which makes it cheaper and easier to
access. How much should we move into them? Canonical asset load states
(via atomics)? (`handle.is_loaded()` would be very cool). Should we
store the entire asset and remove the `Assets<T>` collection?
(`Arc<RwLock<Option<Image>>>`?)
* **Support processing and loading files without extensions**: This is a
pretty arbitrary restriction and could be supported with very minimal
changes.
* **Folder Meta**: It would be nice if we could define per folder
processor configuration defaults (likely in a `.meta` or `.folder_meta`
file). Things like "default to linear filtering for all Images in this
folder".
* **Replace async_broadcast with event-listener?** This might be
approximately drop-in for some uses and it feels more lightweight.
* **Support Running the AssetProcessor on the Web**: Most of the hard
work is done here, but there are some easy straggling TODOs (make the
transaction log an interface instead of a direct file writer so we can
write a web storage backend, implement an AssetReader/AssetWriter that
reads/writes to something like LocalStorage).
* **Consider identifying and preventing circular dependencies**: This is
especially important for "processor dependencies", as processing will
silently never finish in these cases.
* **Built-in/Inlined Asset Hot Reloading**: This PR regresses
"built-in/inlined" asset hot reloading (previously provided by the
DebugAssetServer). I'm intentionally punting this because I think it can
be cleanly implemented with "multiple asset sources" by registering a
"debug asset source" (ex: `debug://bevy_pbr/src/render/pbr.wgsl` asset
paths) in combination with an AssetWatcher for that asset source and
support for "manually loading paths with asset bytes instead of
AssetReaders". The old DebugAssetServer was quite nasty and I'd love to
avoid that hackery going forward.
* **Investigate ways to remove double-parsing meta files**: Parsing meta
files currently involves parsing once with "minimal" versions of the
meta file to extract the type name of the loader/processor config, then
parsing again to parse the "full" meta. This is suboptimal. We should be
able to define custom deserializers that (1) assume the loader/processor
type name comes first and (2) dynamically look up the loader/processor
registrations to deserialize settings in-line (similar to components in
the bevy scene format). Another alternative: deserialize as dynamic
Reflect objects and then convert.
* **More runtime loading configuration**: Support using the Handle type
as a hint to select an asset loader (instead of relying on AssetPath
extensions)
* **More high level Processor trait implementations**: For example, it
might be worth adding support for arbitrary chains of "asset transforms"
that modify an in-memory asset representation between loading and
saving. (ex: load a Mesh, run a `subdivide_mesh` transform, followed by
a `flip_normals` transform, then save the mesh to an efficient
compressed format).
* **Bevy Scene Handle Deserialization**: (see the relevant [Draft TODO
item](#draft-todo) for context)
* **Explore High Level Load Interfaces**: See [this
discussion](#discuss-on_loaded-high-level-interface) for one prototype.
* **Asset Streaming**: It would be great if we could stream Assets (ex:
stream a long video file piece by piece)
* **ID Exchanging**: In this PR Asset Handles/AssetIds are bigger than
they need to be because they have a Uuid enum variant. If we implement
an "id exchanging" system that trades Uuids for "efficient runtime ids",
we can cut down on the size of AssetIds, making them more efficient.
This has some open design questions, such as how to spawn entities with
"default" handle values (as these wouldn't have access to the exchange
api in the current system).
* **Asset Path Fixup Tooling**: Assets that inline asset paths inside
them will break when an asset moves. The asset system provides the
functionality to detect when paths break. We should build a framework
that enables formats to define "path migrations". This is especially
important for scene files. For editor-generated files, we should also
consider using UUIDs (see other bullet point) to avoid the need to
migrate in these cases.
---------
Co-authored-by: BeastLe9enD <beastle9end@outlook.de>
Co-authored-by: Mike <mike.hsu@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
# Objective
- The current `EventReader::iter` has been determined to cause confusion
among new Bevy users. It was suggested by @JoJoJet to rename the method
to better clarify its usage.
- Solves #9624
## Solution
- Rename `EventReader::iter` to `EventReader::read`.
- Rename `EventReader::iter_with_id` to `EventReader::read_with_id`.
- Rename `ManualEventReader::iter` to `ManualEventReader::read`.
- Rename `ManualEventReader::iter_with_id` to
`ManualEventReader::read_with_id`.
---
## Changelog
- `EventReader::iter` has been renamed to `EventReader::read`.
- `EventReader::iter_with_id` has been renamed to
`EventReader::read_with_id`.
- `ManualEventReader::iter` has been renamed to
`ManualEventReader::read`.
- `ManualEventReader::iter_with_id` has been renamed to
`ManualEventReader::read_with_id`.
- Deprecated `EventReader::iter`
- Deprecated `EventReader::iter_with_id`
- Deprecated `ManualEventReader::iter`
- Deprecated `ManualEventReader::iter_with_id`
## Migration Guide
- Existing usages of `EventReader::iter` and `EventReader::iter_with_id`
will have to be changed to `EventReader::read` and
`EventReader::read_with_id` respectively.
- Existing usages of `ManualEventReader::iter` and
`ManualEventReader::iter_with_id` will have to be changed to
`ManualEventReader::read` and `ManualEventReader::read_with_id`
respectively.
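A typical migration looks like this (`MyEvent` is a placeholder event type):
```rust
fn my_system(mut events: EventReader<MyEvent>) {
    // Before:
    // for event in events.iter() {}

    // After:
    for event in events.read() {
        // handle `event`
    }
}
```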
# Objective
- Fixes #9533
## Solution
* Added `Val::ZERO` as a constant which is defined as `Val::Px(0.)`.
* Added a manual `PartialEq` implementation for `Val` which allows any
zero value to equal any other zero value, e.g. `Val::Px(0.) ==
Val::Percent(0.)`. This is technically a breaking change, as
`Val::Px(0.) == Val::Percent(0.)` now evaluates to `true` instead of
`false` (see the sketch after this list)
* Replaced instances of `Val::Px(0.)`, `Val::Percent(0.)`, etc. with
`Val::ZERO`
* Fixed `bevy_ui::layout::convert::tests::test_convert_from` test to
account for Taffy not equating `Points(0.)` and `Percent(0.)`. These
tests now use `assert_eq!(...)` instead of `assert!(matches!(...))`
which gives easier to diagnose error messages.
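A minimal sketch of the new zero-equality semantics (illustrative assertions, not the implementation itself):
```rust
use bevy::ui::Val;

fn main() {
    // Any zero value now equals any other zero value.
    assert_eq!(Val::ZERO, Val::Px(0.));
    assert_eq!(Val::Px(0.), Val::Percent(0.));
    // Non-zero values of different variants remain unequal.
    assert_ne!(Val::Px(10.), Val::Percent(10.));
}
```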
# Objective
[Rust 1.72.0](https://blog.rust-lang.org/2023/08/24/Rust-1.72.0.html) is
now stable.
# Notes
- `let-else` formatting has arrived!
- I chose to allow `explicit_iter_loop` due to
https://github.com/rust-lang/rust-clippy/issues/11074.
We didn't hit any of the false positives that prevent compilation, but
fixing this did produce a lot of the "symbol soup" mentioned, e.g. `for
image in &mut *image_events {`.
Happy to undo this if there's consensus the other way.
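For context, this is the rewrite the lint asks for, shown on a plain `Vec` (illustrative; through smart-pointer wrappers like `ResMut` the suggestion becomes the `&mut *wrapper` form quoted above):
```rust
fn main() {
    let mut values = vec![1, 2, 3];
    // `explicit_iter_loop` flags this form...
    for v in values.iter_mut() {
        *v += 1;
    }
    // ...and suggests this instead.
    for v in &mut values {
        *v += 1;
    }
}
```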
---------
Co-authored-by: François <mockersf@gmail.com>
# Objective
- Some examples crash in CI because of needing too many resources for
the windows runner
- Some examples have random results making it hard to compare
screenshots
## Solution
- `bloom_3d`: reduce the number of spheres
- `pbr`: use simpler spheres and reuse the mesh
- `tonemapping`: use simpler spheres and reuse the mesh
- `shadow_biases`: reduce the number of spheres
- `spotlight`: use a seeded rng, move more cubes in view while reducing
the total number of cubes, and reuse meshes and materials
- `external_source_external_thread`, `iter_combinations`,
`parallel_query`: use a seeded rng (see the sketch after this list)
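The seeded-rng change looks roughly like this (a sketch assuming the `rand` crate's `StdRng`; the exact rng type and seed used in each example may differ):
```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

fn spawn_positions() -> Vec<(f32, f32)> {
    // A fixed seed makes example output deterministic across runs,
    // so CI screenshots can be compared reliably.
    let mut rng = StdRng::seed_from_u64(42);
    (0..10)
        .map(|_| (rng.gen_range(-5.0..5.0), rng.gen_range(-5.0..5.0)))
        .collect()
}
```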
Examples of errors encountered:
```
Caused by:
    In Device::create_bind_group
      note: label = `bloom_upsampling_bind_group`
    Not enough memory left
```
```
Caused by:
    In Queue::write_buffer
    Parent device is lost
```
```
ERROR wgpu_core::device::life: Mapping failed Device(Lost)
```