# Objective
When the user renders multiple cameras to the same output texture, it
can sometimes be confusing which `ClearColorConfig` each camera needs to
avoid overwriting the previous camera's output. This is particularly true
in cases where the user mixes HDR and non-HDR cameras, which means that
the scene is being rendered to different internal textures.
## Solution
When a view has a configured viewport, set the GPU scissor in the
upscaling node so we don't overwrite areas that were written to by other
cameras.
## Testing
Ran the `split_screen` example.
# Objective
Fixes #14120
`ui_texture_slice` and `ui_texture_atlas_slice` were working as
intended, so undo the changes.
## Solution
Partially revert https://github.com/bevyengine/bevy/pull/14115 for
`ui_texture_slice` and `ui_texture_atlas_slice`.
## Testing
Ran those two examples, confirmed the border color is the thing that
changes when buttons are hovered.
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
# Objective
- #14017 changed how `UiImage` and `BackgroundColor` work
- one change was missed in the `color_grading` example, another in the
mobile example
## Solution
- Change it in the examples
# Objective
In Bevy 0.13, `BackgroundColor` simply tinted the image of any
`UiImage`. This was confusing: in every other case (e.g. Text), this
added a solid square behind the element. #11165 changed this, but
removed `BackgroundColor` from `ImageBundle` to avoid confusion, since
the semantic meaning had changed.
However, this resulted in a serious UX downgrade / inconsistency, as
this behavior was no longer part of the bundle (unlike for `TextBundle`
or `NodeBundle`), leaving users with a relatively frustrating upgrade
path.
Additionally, adding both `BackgroundColor` and `UiImage` resulted in a
bizarre effect, where the background color was seemingly ignored as it
was covered by a solid white placeholder image.
Fixes #13969.
## Solution
Per @viridia's design:
> - if you don't specify a background color, it's transparent.
> - if you don't specify an image color, it's white (because it's a
multiplier).
> - if you don't specify an image, no image is drawn.
> - if you specify both a background color and an image color, they are
independent.
> - the background color is drawn behind the image (in whatever pixels
are transparent)
As laid out by @benfrankel, this involves:
1. Changing the default `UiImage` to use a transparent texture but a
pure white tint.
2. Adding `UiImage::solid_color` to quickly set placeholder images.
3. Changing the default `BorderColor` and `BackgroundColor` to
transparent.
4. Removing the default overrides for these values in the other assorted
UI bundles.
5. Adding `BackgroundColor` back to `ImageBundle` and `ButtonBundle`.
6. Adding a 1x1 `Image::transparent`, which can be accessed from
`Assets<Image>` via the `TRANSPARENT_IMAGE_HANDLE` constant.
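To illustrate the new semantics, here is a hedged sketch of a node combining a
tinted image with a background color (the asset path is made up; field names
follow the description above):

```rust
use bevy::prelude::*;

fn spawn_icon(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn(ImageBundle {
        image: UiImage {
            // Tint multiplier for the image; pure white leaves it unchanged.
            color: Color::srgb(1.0, 0.8, 0.8),
            texture: asset_server.load("icon.png"),
            ..default()
        },
        // Drawn behind the image, visible wherever the image is transparent.
        background_color: BackgroundColor(Color::BLACK),
        ..default()
    });
}
```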
Huge thanks to everyone who helped out with the design in the linked
issue and [the Discord
thread](https://discord.com/channels/691052431525675048/1255209923890118697/1255209999278280844):
this was very much a joint design.
@cart helped me figure out how to set the UiImage's default texture to a
transparent 1x1 image, which is a much nicer fix.
## Testing
I've checked the examples modified by this PR, and the `ui` example as
well just to be sure.
## Migration Guide
- `BackgroundColor` no longer tints the color of images in `ImageBundle`
or `ButtonBundle`. Set `UiImage::color` to tint images instead.
- The default texture for `UiImage` is now a transparent white square.
Use `UiImage::solid_color` to quickly draw debug images.
- The default value for `BackgroundColor` and `BorderColor` is now
transparent. Set the color to white manually to return to previous
behavior.
# Objective
The documentation for
[`Transform::align`](https://docs.rs/bevy/0.14.0-rc.3/bevy/transform/components/struct.Transform.html#method.align)
mentions a hypothetical ship model. Showing this concretely would be a
nice improvement over using a cube.
> For example, if a spaceship model has its nose pointing in the
X-direction in its own local coordinates and its dorsal fin pointing in
the Y-direction, then align(Dir3::X, v, Dir3::Y, w) will make the
spaceship’s nose point in the direction of v, while the dorsal fin does
its best to point in the direction w.
## Solution
This commit makes the ship less hypothetical by using a Kenney ship
model in the example.
The local axes for the ship needed to change to accommodate the glTF, so
the hypothetical ship in the documentation and this example's local axes
don't necessarily match. The docs use `align(Dir3::X, v, Dir3::Y, w)` and
this example now uses `(Vec3::NEG_Z, *first, Vec3::X, *second)`.
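For reference, a hedged sketch of the call pattern used here (the direction
arguments are placeholders, not the example's exact inputs):

```rust
use bevy::prelude::*;

/// Sketch: orient a ship whose nose points along local -Z (the glTF forward
/// convention), with local +X as the secondary axis.
fn align_ship(transform: &mut Transform, nose_dir: Vec3, secondary_dir: Vec3) {
    // First pair: make local -Z (the nose) point along `nose_dir`.
    // Second pair: local +X does its best to point along `secondary_dir`.
    transform.align(Vec3::NEG_Z, nose_dir, Vec3::X, secondary_dir);
}
```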
I manually modified the `craft_speederD` node's `translation` to be
`0,0,0` in the glTF file, which means it now differs from Kenney's
original model.
Original ship from: https://kenney.nl/assets/space-kit
## Testing
```
cargo run --example align
```
![screenshot-2024-06-19-at-14 27
05@2x](https://github.com/bevyengine/bevy/assets/551247/ab1afc8f-76b2-42b6-b455-f0d1c77cfed7)
![screenshot-2024-06-19-at-14 27
12@2x](https://github.com/bevyengine/bevy/assets/551247/4a01031c-4ea1-43ab-8078-3656db67efe0)
![screenshot-2024-06-19-at-14 27
20@2x](https://github.com/bevyengine/bevy/assets/551247/06830f38-ba2b-4e3a-a265-2d10f9ea9de9)
# Objective
Fixes #13920
## Solution
As described in the issue.
## Testing
Moved a custom transition plugin in the example before any of the
app-state methods.
# Objective
A very common way to organize a first-person view is to split it into
two kinds of models:
- The *view model* is the model that represents the player's body.
- The *world model* is everything else.
The reason for this distinction is that these two models should be
rendered with different FOVs.
The view model is typically designed and animated with a very specific
FOV in mind, so it is
generally *fixed* and cannot be changed by a player. The world model, on
the other hand, should
be able to change its FOV to accommodate the player's preferences for
the following reasons:
- *Accessibility*: How prone is the player to motion sickness? A wider
FOV can help.
- *Tactical preference*: Does the player want to see more of the
battlefield?
Or have a more zoomed-in view for precision aiming?
- *Physical considerations*: How well does the in-game FOV match the
player's real-world FOV?
Are they sitting in front of a monitor or playing on a TV in the living
room? How big is the screen?
## Solution
I've added an example implementing the described setup as follows.
The `Player` is an entity holding two cameras, one for each model. The
view model camera has a fixed
FOV of 70 degrees, while the world model camera has a variable FOV that
can be changed by the player.
I use different `RenderLayers` to select what to render.
- The world model camera has no explicit `RenderLayers` component, so it
uses layer 0.
All static objects in the scene are also on layer 0 for the same reason.
- The view model camera has a `RenderLayers` component with layer 1, so
it only renders objects
explicitly assigned to layer 1. The arm of the player is one such
object.
The order of the view model camera is additionally bumped to 1 to ensure
it renders on top of the world model.
- The light source in the scene must illuminate both the view model and
the world model, so it is
assigned to both layers 0 and 1.
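A rough sketch of that camera/layer setup (marker component and FOV values are
placeholders, not the example's exact code):

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

#[derive(Component)]
struct Player;

fn spawn_player(mut commands: Commands) {
    commands
        .spawn((Player, SpatialBundle::default()))
        .with_children(|parent| {
            // World model camera: stays on the default layer 0;
            // its FOV is adjustable by the player.
            parent.spawn(Camera3dBundle {
                projection: PerspectiveProjection {
                    fov: 90.0_f32.to_radians(),
                    ..default()
                }
                .into(),
                ..default()
            });
            // View model camera: renders only layer 1, drawn on top of the
            // world model (order 1), with a fixed 70 degree FOV.
            parent.spawn((
                Camera3dBundle {
                    camera: Camera {
                        order: 1,
                        ..default()
                    },
                    projection: PerspectiveProjection {
                        fov: 70.0_f32.to_radians(),
                        ..default()
                    }
                    .into(),
                    ..default()
                },
                RenderLayers::layer(1),
            ));
        });
}
```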
To better see the effect, the player can move the camera by dragging
their mouse and change the world model's FOV with the arrow keys. The
arrow up key maps to "decrease FOV" and the arrow down key maps to
"increase FOV". This sounds backwards on paper, but is more intuitive
when actually changing the FOV in-game since a decrease in FOV looks
like a zoom-in.
I intentionally do not allow changing the view model's FOV even though
it would be illustrative because that would be an anti-pattern and bloat
the code a bit.
The example is called `first_person_view_model` and not just
`first_person` because I want to highlight that this is not a simple
flycam, but actually renders the player.
## Testing
Default FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/8c2e804f-fac2-48c7-8a22-d85af999dfb2">
Decreased FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/1733b3e5-f583-4214-a454-3554e3cbd066">
Increased FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/0b0640e6-5743-46f6-a79a-7181ba9678e8">
Note that the white bar on the right represents the player's arm, which
is more obvious in-game because you can move the camera around.
The box on top is there to make sure that the view model is receiving
shadows.
I tested only on macOS.
---
## Changelog
I don't think new examples go in here, do they?
## Caveat
The solution used here was implemented with help by @robtfm on
[Discord](https://discord.com/channels/691052431525675048/866787577687310356/1241019224491561000):
> shadow maps are specific to lights, not to layers
> if you want shadows from some meshes that are not visible, you could
have light on layer 1+2, meshes on layer 2, camera on layer 1 (for
example)
> but this might change in future, it's not exactly an intended feature
In other words, the example code as-is is not guaranteed to work in the
future. I want to bring this up because the use-case presented here is
extremely common in first-person games and important for accessibility.
It would be good to have a blessed and easy way of how to achieve it.
I'm also not happy about how I get the `perspective` variable in
`change_fov`. Very open to suggestions :)
## Related issues
- Addresses parts of #12658
- Addresses parts of #12588
---------
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
- The observers example uses an unseeded RNG, prints text to the console,
and doesn't display on-screen text like other examples do
## Solution
- use seeded random
- log instead of printing
- use common settings for UI text
# Objective
- Fixes #13825
## Solution
- Cherry picked and fixed non-trivial conflicts to be able to merge
#10839 into the 0.14 release branch.
Link to PR: https://github.com/bevyengine/bevy/pull/10839
Co-authored-by: James O'Brien <james.obrien@drafly.net>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: MiniaczQ <xnetroidpl@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
While learning about shaders and pipelines, I found this example to be
misleading; it wasn't clear to me how the node knew which "instance" of
`PostProcessSettings` to send to the shader (since the combination of
`ExtractComponentPlugin` and `UniformComponentPlugin` extracts and sends
_all_ of our `PostProcessSettings` components to the GPU).
The goal of this PR is to clarify how to target the view specific
`PostProcessSettings` in the shader when there are multiple cameras.
## Solution
To accomplish this, we can use a dynamic uniform buffer for
`PostProcessSettings`, querying for the relevant `DynamicUniformIndex`
in the `PostProcessNode` to get the relevant index to use with the bind
group.
While the example in its current state is _correct_, I believe the fact
that it's intended to showcase a per-camera post-processing effect
warrants a dynamic uniform buffer (even though in the context of this
example we have only one camera, and therefore no adverse behaviour).
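Sketched out, the relevant part of the node looks roughly like this (not the
example's exact code; the bind group index and variable names are assumptions):

```rust
// Inside the post-process node's `run` method, fetch this view's dynamic
// offset into the `PostProcessSettings` uniform buffer...
let settings_index = world
    .entity(view_entity)
    .get::<DynamicUniformIndex<PostProcessSettings>>()
    .expect("view should have an extracted PostProcessSettings");

// ...and pass it as the dynamic offset when binding, so the shader reads
// this camera's settings rather than whichever one happened to come first.
render_pass.set_bind_group(0, &bind_group, &[settings_index.index()]);
```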
## Testing
- Run the `post_processing` example before and after this change,
verifying they behave the same.
## Reviewer notes
This is my first PR to Bevy, and I'm by no means an expert in the world
of rendering (though I'm trying to learn all I can). If there's a better
way to do this / a reason not to take this route, I'd love to hear it!
Thanks in advance.
# Objective
- Add a new example showcasing how to add custom primitives and what you
can do with them.
## Solution
- Added a new example `custom_primitives` with a 2D heart shape
primitive highlighting
- `Bounded2d` by implementing and visualising bounding shapes,
- `Measured2d` by implementing it,
- `Meshable` to show the shape on the screen
- The example also includes an `Extrusion<Heart>` implementing
- `Measured3d`,
- `Bounded3d` using the `BoundedExtrusion` trait and
- meshing using the `Extrudable` trait.
## Additional information
Here are two images of the heart and its extrusion:
![image_2024-06-10_194631194](https://github.com/bevyengine/bevy/assets/62256001/53f1836c-df74-4ba6-85e9-fabdafa94c66)
![Screenshot 2024-06-10
194609](https://github.com/bevyengine/bevy/assets/62256001/b1630e71-6e94-4293-b7b5-da8d9cc98faf)
---------
Co-authored-by: Jakub Marcowski <37378746+Chubercik@users.noreply.github.com>
# Objective
Fixes #13711
## Solution
Introduce smaller, generic system sets for each schedule variant, which
are ordered against other generic variants:
- `ExitSchedules<S>` - For `OnExit` schedules, runs from leaf states to
root states.
- `TransitionSchedules<S>` - For `OnTransition` schedules, runs in
arbitrary order.
- `EnterSchedules<S>` - For `OnEnter` schedules, runs from root states
to leaf states.
Also unified the `ApplyStateTransition<S>` schedule, which works in
basically the same way, just for internals.
## Testing
- One test that tests schedule execution order
---------
Co-authored-by: Lee-Orr <lee-orr@users.noreply.github.com>
# Objective
This PR addresses the 2D part of #12658. I plan to separate the examples
and make one PR per camera example.
## Solution
Added a new top-down example composed of:
- [x] Player keyboard movements
- [x] UI for keyboard instructions
- [x] Colors and bloom effect to see the movement of the player
- [x] Camera smooth movement towards the player (lerp)
## Testing
```bash
cargo run --features="wayland,bevy/dynamic_linking" --example 2d_top_down_camera
```
https://github.com/bevyengine/bevy/assets/10638479/95db0587-e5e0-4f55-be11-97444b795793
# Objective
- Where possible, it's recommended to use `BufferVec` over
`encase::StorageBuffer` for performance reasons. It doesn't really matter
for the example, but it's still important to teach the better solution.
## Solution
- Use `BufferVec` in the `gpu_readback` example
# Objective
- Implement `Meshable` for `Extrusion<T>`
## Solution
- `Meshable` requires `Meshable::Output: MeshBuilder` now. This means
that all `some_primitive.mesh()` calls now return a `MeshBuilder`. These
were added for primitives that did not have one prior to this.
- A new trait `Extrudable: MeshBuilder` has been added. This trait
allows you to specify the indices of the perimeter of the mesh created
by this `MeshBuilder` and whether they are to be shaded smooth or flat.
- `Extrusion<P: Primitive2d + Meshable>` is now `Meshable` as well. The
associated `MeshBuilder` is `ExtrusionMeshBuilder`, which is generic over
`P` and uses the `MeshBuilder` of its base shape internally.
- `ExtrusionMeshBuilder` exposes the configuration functions of its
base shape's builder.
- Updated the `3d_shapes` example to include `Extrusion`s
## Migration Guide
- Depending on the context, you may need to explicitly call
`.mesh().build()` on primitives where you have previously called
`.mesh()`
- The `Output` type of custom `Meshable` implementations must now
implement `MeshBuilder`.
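A small sketch of what the migration looks like in practice (shapes and values
are arbitrary):

```rust
use bevy::prelude::*;

fn setup(mut meshes: ResMut<Assets<Mesh>>) {
    // `.mesh()` now returns a `MeshBuilder`; finish with `.build()` where a
    // `Mesh` was previously produced directly.
    let circle: Mesh = Circle::new(0.5).mesh().build();

    // `Extrusion<P>` is now `Meshable` too, via `ExtrusionMeshBuilder`.
    let annulus_prism: Mesh = Extrusion::new(Annulus::new(0.25, 0.5), 1.0).mesh().build();

    meshes.add(circle);
    meshes.add(annulus_prism);
}
```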
## Additional information
- The extrusion's UVs are laid out so that
- the front face (`+Z`) is in the area between `(0, 0)` and `(0.5,
0.5)`,
- the back face (`-Z`) is in the area between `(0.5, 0)` and `(1, 0.5)`
- the mantle is in the area between `(0, 0.5)` and `(1, 1)`. Each
`PerimeterSegment` you specified in the `Extrudable` implementation will
be allocated an equal portion of this area.
- The UVs of the base shape are scaled to fit the front/back areas, so
however the base shape filled the full UV space is how these areas will
be filled.
Here is an example of what that looks like on a capsule:
https://github.com/bevyengine/bevy/assets/62256001/425ad288-fbbc-4634-9d3f-5e846cdce85f
This is the texture used:
![extrusion
uvs](https://github.com/bevyengine/bevy/assets/62256001/4e54e421-bfda-44b9-8571-412525cebddf)
The `3d_shapes` example now looks like this:
![image_2024-05-22_235915753](https://github.com/bevyengine/bevy/assets/62256001/3d8bc86d-9ed1-47f2-899a-27aac0a265dd)
---------
Co-authored-by: Lynn Büttgenbach <62256001+solis-lumine-vorago@users.noreply.github.com>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
Co-authored-by: Matty <2975848+mweatherley@users.noreply.github.com>
This commit implements a large subset of [*subpixel morphological
antialiasing*], better known as SMAA. SMAA is a 2011 antialiasing
technique that detects jaggies in an aliased image and smooths them out.
Despite its age, it's been a continual staple of games for over a
decade. Four quality presets are available: *low*, *medium*, *high*, and
*ultra*. I set the default to *high*, on account of modern GPUs being
significantly faster than they were in 2011.
Like the already-implemented FXAA, SMAA works on the aliased rendered image.
Unlike FXAA, it requires three passes: (1) edge detection; (2) blending
weight calculation; (3) neighborhood blending. Each of the first two
passes writes an intermediate texture for use by the next pass. The
first pass also writes to a stencil buffer in order to dramatically
reduce the number of pixels that the second pass has to examine. Also
unlike FXAA, two built-in lookup textures are required; I bundle them
into the library in compressed KTX2 format.
The [reference implementation of SMAA] is in HLSL, with abundant use of
preprocessor macros to achieve GLSL compatibility. Unfortunately, the
reference implementation predates WGSL by over a decade, so I had to
translate the HLSL to WGSL manually. As much as was reasonably possible
without sacrificing readability, I tried to translate line by line,
preserving comments, both to aid reviewing and to allow patches to the
HLSL to more easily apply to the WGSL. Most of SMAA's features are
supported, but in the interests of making this patch somewhat less huge,
I skipped a few of the more exotic ones:
* The temporal variant is currently unsupported. This is and has been
used in shipping games, so supporting temporal SMAA would be useful
follow-up work. It would, however, require some significant work on TAA
to ensure compatibility, so I opted to skip it in this patch.
* Depth- and chroma-based edge detection are unimplemented; only luma
is. Depth is lower-quality, but faster; chroma is higher-quality, but
slower. Luma is the suggested default edge detection algorithm. (Note
that depth-based edge detection wouldn't work on WebGL 2 anyway, because
of the Naga bug whereby depth sampling is miscompiled in GLSL. This is
the same bug that prevents depth of field from working on that
platform.)
* Predicated thresholding is currently unsupported.
* My implementation is incompatible with SSAA and MSAA, unlike the
original; MSAA must be turned off to use SMAA in Bevy. I believe this
feature was rarely used in practice.
The `anti_aliasing` example has been updated to allow experimentation
with and testing of the different SMAA quality presets. Along the way, I
refactored the example's help text rendering code a bit to eliminate
code repetition.
SMAA is fully supported on WebGL 2.
Fixes #9819.
[*subpixel morphological antialiasing*]: https://www.iryoku.com/smaa/
[reference implementation of SMAA]: https://github.com/iryoku/smaa
## Changelog
### Added
* Subpixel morphological antialiasing, or SMAA, is now available. To use
it, add the `SmaaSettings` component to your `Camera`.
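A minimal sketch of enabling it (MSAA is turned off since SMAA is incompatible
with it, per the notes above):

```rust
use bevy::core_pipeline::smaa::{SmaaPreset, SmaaSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // The default preset is High; Low, Medium, and Ultra are also available.
        SmaaSettings {
            preset: SmaaPreset::High,
        },
    ));
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // SMAA is incompatible with MSAA, so disable MSAA.
        .insert_resource(Msaa::Off)
        .add_systems(Startup, setup)
        .run();
}
```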
![Screenshot 2024-05-18
134311](https://github.com/bevyengine/bevy/assets/157897/ffbd611c-1b32-4491-b2e2-e410688852ee)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
This PR addresses one of the issues from [discord state
discussion](https://discord.com/channels/691052431525675048/1237949214017716356).
Same-state transitions can be desirable, so there should exist a hook
for them.
Fixes https://github.com/bevyengine/bevy/issues/9130.
## Solution
- Allow `StateTransitionEvent<S>` to contain identity transitions.
- Ignore identity transitions at schedule running level (`OnExit`,
`OnTransition`, `OnEnter`).
- Propagate identity transitions through `SubStates` and
`ComputedStates`.
- Add example about registering custom transition schedules.
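For instance, a system reacting to transition events may now want to detect (or
deliberately handle) identity transitions; a hedged sketch with a made-up
`GameState`:

```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, PartialEq, Eq, Hash, Default)]
enum GameState {
    #[default]
    Menu,
    InGame,
}

fn log_transitions(mut events: EventReader<StateTransitionEvent<GameState>>) {
    for event in events.read() {
        if event.exited == event.entered {
            // Same-state (identity) transition, e.g. re-entering `GameState::InGame`.
            info!("re-entered {:?}", event.entered);
        } else {
            info!("{:?} -> {:?}", event.exited, event.entered);
        }
    }
}
```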
## Changelog
- `StateTransitionEvent<S>` can be emitted with same `exited` and
`entered` state.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- With the recent winit update, touchpad specific events can also be
triggered on mobile
## Solution
- Rename them to gestures and add support for the new ones
## Testing
- Tested on the mobile example on iOS
https://github.com/bevyengine/bevy/assets/8672791/da4ed23f-ff0a-41b2-9dcd-726e8546bef2
## Migration Guide
- `TouchpadMagnify` has been renamed to `PinchGesture`
- `TouchpadRotate` has been renamed to `RotationGesture`
---------
Co-authored-by: mike <ramirezmike2@gmail.com>
# Objective
Move `StateScoped` and `log_transitions` to `bevy_state`, since they're
useful for end users.
Addresses #12852, although not in the way the issue had in mind.
## Solution
- Added `bevy_hierarchy` to default features of `bevy_state`.
- Move `log_transitions` to `transitions` module.
- Move `StateScoped` to `state_scoped` module, gated behind
`bevy_hierarchy` feature.
- Refreshed implementation.
- Added `enable_state_scoped_entities<S: States>()` to add the required
machinery to `App` for clearing state-scoped entities.
## Changelog
- Added `log_transitions` for displaying state transitions.
- Added `StateScoped` for binding entity lifetime to a state, and the app
method `enable_state_scoped_entities` to register the cleanup behavior.
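A rough usage sketch (the state and the spawned entity are made up):

```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, PartialEq, Eq, Hash, Default)]
enum GameState {
    #[default]
    Menu,
    InGame,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_state::<GameState>()
        // Registers the machinery that despawns `StateScoped` entities on exit.
        .enable_state_scoped_entities::<GameState>()
        .add_systems(OnEnter(GameState::InGame), spawn_level)
        .run();
}

fn spawn_level(mut commands: Commands) {
    // Despawned automatically when `GameState::InGame` is exited.
    commands.spawn((StateScoped(GameState::InGame), SpriteBundle::default()));
}
```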
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
We want to use the clustering infrastructure for light probes and decals
as well, not just point lights. This patch builds on top of #13640 and
performs the rename.
To make this series easier to review, this patch makes no code changes.
Only identifiers and comments are modified.
## Migration Guide
* In the PBR shaders, `point_lights` is now known as
`clusterable_objects`, `PointLight` is now known as `ClusterableObject`,
and `cluster_light_index_lists` is now known as
`clusterable_object_index_lists`.
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.
[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.
Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
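A hedged sketch of setting the two parameters on a material (the values are
arbitrary):

```rust
use bevy::prelude::*;

// Sketch: a brushed-metal-like material using the two new parameters.
fn brushed_metal(materials: &mut Assets<StandardMaterial>) -> Handle<StandardMaterial> {
    materials.add(StandardMaterial {
        base_color: Color::srgb(0.8, 0.8, 0.9),
        metallic: 1.0,
        perceptual_roughness: 0.4,
        // 0.0 is isotropic; 1.0 stretches the highlight fully along one direction.
        anisotropy_strength: 0.8,
        // Rotates the anisotropy direction away from the mesh tangent (in radians).
        anisotropy_rotation: 0.0,
        ..default()
    })
}
```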
In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.
A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.
[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/
## Changelog
### Added
* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.
## Screenshots
With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)
Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
# Objective
- Fixes #10909
- Fixes #8492
## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.
## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.
---
## Changelog
- Renamed matrices across the engine to follow an `x_from_y` naming,
making the space conversion more obvious.
## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
# Objective
- Add GizmoBuilders for some primitives as discussed in #13233
## Solution
- `gizmos.primitive_2d(CIRCLE)` and `gizmos.primitive_2d(ELLIPSE)` now
return `Ellipse2dBuilder` as well.
- `gizmos.primitive_3d(SPHERE)` and `gizmos.sphere()` now return the
same `SphereBuilder`.
- the `.circle_segments` method on the `SphereBuilder` that used to be
returned by `.sphere()` is now called `.segments`
- the sphere primitive gizmo now matches the `gizmos.sphere` gizmo
- `gizmos.primitive_2d(ANNULUS)` now returns an `Annulus2dBuilder`,
allowing configuration of the `segments`
- gizmo cylinders and capsules now have only 1 line per axis, similar
to `gizmos.sphere`
## Migration Guide
- Some `gizmos.primitive_nd` methods now return new or different
builders. You may need to adjust types and match statements accordingly
- Replace any calls to `circle_segments()` with `.segments()`
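A small sketch of the rename (position, radius, and segment count are
arbitrary):

```rust
use bevy::prelude::*;

fn draw(mut gizmos: Gizmos) {
    // Before this change, this was `.circle_segments(64)`.
    gizmos
        .sphere(Vec3::ZERO, Quat::IDENTITY, 1.0, Color::WHITE)
        .segments(64);
}
```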
---------
Co-authored-by: Raphael Büttgenbach <62256001+solis-lumine-vorago@users.noreply.github.com>
# Objective
- Plane subdivision was removed without providing an alternative
## Solution
- Add subdivision to the PlaneMeshBuilder
---
## Migration Guide
If you were using `Plane` `subdivisions`, you now need to use
`Plane3d::default().mesh().subdivisions(10)`
Fixes https://github.com/bevyengine/bevy/issues/13258
Currently, copy-pasting the example into a new project without also
copying "shaders/game_of_life.wgsl" gives an unhelpful blank screen.
This change makes it panic instead. I think nicer error handling is
outside the scope of the example, and this is good enough to point out
that the shader code is missing.
# Objective
- fixes#4823
## Solution
As outlined in the discussion in the linked issue as the best current
solution, this PR adds specific GltfExtras for
- scenes
- meshes
- materials
- As it is, this is not a breaking change. I hesitated to rename the
current `GltfExtras` component to `PrimitiveGltfExtras`, but that would
result in a breaking change and might be a bit confusing as to what
"primitive" refers to.
## Testing
- I included a bare-bones example & asset (exported glTF file from
Blender) with glTF extras at all the relevant levels: scene, mesh,
material
---
## Changelog
- adds "SceneGltfExtras" injected at the scene level if any
- adds "MeshGltfExtras", injected at the mesh level if any
- adds "MaterialGltfExtras", injected at the mesh level if any: ie if a
mesh has a material that has gltf extras, the component will be injected
there.
# Objective
- Upgrade winit to v0.30
- Fixes https://github.com/bevyengine/bevy/issues/13331
## Solution
This is a rewrite/adaptation of the new trait system described and
implemented in `winit` v0.30.
## Migration Guide
The custom `UserEvent` is now renamed to `WakeUp`, used to wake up the loop
if anything happens outside the app (a new
[custom_user_event](https://github.com/bevyengine/bevy/pull/13366/files#diff-2de8c0a8d3028d0059a3d80ae31b2bbc1cde2595ce2d317ea378fe3e0cf6ef2d)
example shows this behavior).
The internal `UpdateState` has been removed and replaced internally by
`AppLifecycle`. When changed, the `AppLifecycle` is sent as an event.
The `UpdateMode` now accepts only two values: `Continuous` and
`Reactive`, but the latter exposes 3 new properties to enable reactive
to device, user or window events. The previous `UpdateMode::Reactive` is
now equivalent to `UpdateMode::reactive()`, while
`UpdateMode::ReactiveLowPower` to `UpdateMode::reactive_low_power()`.
The `ApplicationLifecycle` has been renamed to `AppLifecycle`, and now
contains the possible values of the application state inside the event
loop:
* `Idle`: the loop has not started yet
* `Running` (previously called `Started`): the loop is running
* `WillSuspend`: the loop is going to be suspended
* `Suspended`: the loop is suspended
* `WillResume`: the loop is going to be resumed
Note: the `Resumed` state has been removed since the resumed app is just
running.
Finally, now that `winit` enables this, it extends the `WinitPlugin` to
support custom events.
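As a hedged illustration of the new `UpdateMode` constructors (the durations
are arbitrary):

```rust
use bevy::prelude::*;
use bevy::winit::{UpdateMode, WinitSettings};
use std::time::Duration;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .insert_resource(WinitSettings {
            // Previously `UpdateMode::Reactive`.
            focused_mode: UpdateMode::reactive(Duration::from_secs_f64(1.0 / 60.0)),
            // Previously `UpdateMode::ReactiveLowPower`.
            unfocused_mode: UpdateMode::reactive_low_power(Duration::from_millis(100)),
        })
        .run();
}
```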
## Test platforms
- [x] Windows
- [x] macOS
- [x] Linux (x11)
- [x] Linux (Wayland)
- [x] Android
- [x] iOS
- [x] WASM/WebGPU
- [x] WASM/WebGL2
## Outstanding issues / regressions
- [ ] iOS: build failed in CI
- blocking, but may just be flakiness
- [x] Cross-platform: when the window is maximised, changes in the scale
factor don't apply; to make them apply, one has to make the window
smaller again. (Re-maximising keeps the updated scale factor)
- non-blocking, but good to fix
- [ ] Android: it's pretty easy to quickly open and close the app and
then the music keeps playing when suspended.
- non-blocking but worrying
- [ ] Web: the application will hang when switching tabs
- Not new, duplicate of https://github.com/bevyengine/bevy/issues/13486
- [ ] Cross-platform?: Screenshot failure, `ERROR present_frames:
wgpu_core::present: No work has been submitted for this frame before`
taking the first screenshot, but after pressing space
- non-blocking, but good to fix
---------
Co-authored-by: François <francois.mockers@vleue.com>
# Objective
Fixes #13606.
Also fixes #13614.
## Solution
Added the missing trait impls, and made `gizmos.arc_2d()` work with a
counter-clockwise angle.
## Testing
- Updated the render_primitives example, and it works.
# Objective
- Follow-up to #13548
- It added a list of all possible labels to the documentation. This seems
hard to keep up to date and doesn't stop people from making spelling mistakes
## Solution
- Add an enum that can create all the labels possible, and encourage its
use rather than manually typed labels
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
- The default font size is too small to be useful in examples or for
debug text.
- Fixes #13587
## Solution
- Updated the default font size value in `TextStyle` from 12px to 24px.
- Switched to `Text` defaults in examples so that most of them use the
default font size.
## Testing
- WIP
---
## Migration Guide
- The default font size has been increased from 12px to 24px. Make sure
you set the font size to appropriate values in places where you were
relying on the `Default` text style.
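For example, to keep the previous appearance you might set the size explicitly
(a sketch; the text content is arbitrary):

```rust
use bevy::prelude::*;

fn spawn_label(mut commands: Commands) {
    commands.spawn(TextBundle::from_section(
        "Hello",
        TextStyle {
            // Restore the old default size; omit this to get the new 24px default.
            font_size: 12.0,
            ..default()
        },
    ));
}
```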
# Objective
Unifies the naming convention between `StateTransitionEvent<S>` and
transition schedules.
## Migration Guide
- `StateTransitionEvent<S>` and the `OnTransition<S>` schedule had their
fields renamed to `exited` and `entered` to match the transition schedules.
# Objective
- In #13542 I broke the `color_grading` example: the UI was not updated to
reflect changed camera settings
- Fixes#13590.
## Solution
- Update the UI when changing color grading
# Objective
Use the "standard" text size / placement for the new text in these
examples.
Continuation of an effort started here:
https://github.com/bevyengine/bevy/pull/8478
This is definitely not comprehensive. I did the ones that were easy to
find and relatively straightforward to update. I meant to just do
`3d_shapes` and `2d_shapes`, but one thing led to another.
## Solution
Use `font_size: 20.0`, the default (built-in) font, `Color::WHITE`
(default), and `Val::Px(12.)` from the edges of the screen.
There are a few little drive-by cleanups of defaults not being used,
etc.
## Testing
Ran the changed examples, verified that they still look reasonable.
# Objective
- In particularly dark scenes, auto-exposure would lead to an unexpected
darkening of the view.
- Fixes#13446.
## Solution
The average luminance should default to something other than 0.0 when
there are no samples; we set it to `settings.min_log_lum`.
## Testing
I was able to reproduce the problem on the `auto_exposure` example by
setting the point light intensity to 2000 and looking into the
right-hand corner. There was a sudden darkening.
Now, the discontinuity is gone.
---------
Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
# Objective
- Fixes #13521
## Solution
Set `ambient_intensity` to 0.0 in the `volumetric_fog` example.
I chose setting it explicitly over changing the default in order to make
it clear that this needs to be set depending on whether you have an
`EnvironmentMapLight`. See documentation for `ambient_intensity` and
related members.
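Roughly, the relevant change in the example looks like this (a sketch, not the
exact diff):

```rust
use bevy::pbr::VolumetricFogSettings;
use bevy::prelude::*;

fn setup_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        VolumetricFogSettings {
            // With an `EnvironmentMapLight` in the scene, ambient light is already
            // accounted for, so zero out the fog's built-in ambient term.
            ambient_intensity: 0.0,
            ..default()
        },
    ));
}
```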
## Testing
- Run the volumetric_fog example and notice how the light shown in
#13521 is not there anymore, as expected.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Small improvements on the `color_grading` example
## Solution
- Simplify button creation by creating them in the default state; the
selected one is then selected automatically
- Don't update the UI if not needed
- Also invert the border of the selected button
- Simplify text update
# Objective
- Show and visually test that 3D primitive sampling works
- Make an example that looks nice.
## Solution
- Added a `sampling_primitives` example which shows all the 3D
primitives being sampled, with a firefly aesthetic.
![image](https://github.com/bevyengine/bevy/assets/27301845/f882438b-2c72-48b1-a6e9-162a80c4273e)
## Testing
- `cargo run --example sampling_primitives`
- Haven't tested WASM.
## Changelog
### Added
- Added a new example, `sampling_primitives`, to showcase all the 3D
sampleable primitives.
## Additional notes:
This example borrowed a bunch of code from the other sampling example,
by @mweatherley.
In future updates this example should be updated with new 3D primitives
as they become sampleable.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
Fixes #13427.
## Solution
I changed the traits, and updated all usages.
## Testing
The `render_primitives` example still works perfectly.
---
## Changelog
- Made `gizmos.primitive_2d()` and `gizmos.primitive_3d()` take the
primitives by ref.
## Migration Guide
- Any usages of `gizmos.primitive_2d()` and/or `gizmos.primitive_3d()`
need to be updated to pass the primitive in by reference.
# Objective
We introduced a bunch of neat random sampling stuff in this release; we
should do a good job of showing people how to use it, and writing
examples is part of this.
## Solution
A new Math example, `random_sampling`, shows off the `ShapeSample` API
functionality. For the moment, it renders a cube and allows the user to
sample points from its interior or boundary in sets of either 1 or 100:
<img width="1440" alt="Screenshot 2024-05-25 at 1 16 08 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/9cb6f53f-c89a-42c2-8907-b11d294c402a">
On the level of code, these are reflected by two ways of using
`ShapeSample`:
```rust
// Get a single random Vec3:
let sample: Vec3 = match *mode {
    Mode::Interior => shape.0.sample_interior(rng),
    Mode::Boundary => shape.0.sample_boundary(rng),
};
```
```rust
// Get 100 random Vec3s:
let samples: Vec<Vec3> = match *mode {
    Mode::Interior => {
        let dist = shape.0.interior_dist();
        dist.sample_iter(&mut rng).take(100).collect()
    }
    Mode::Boundary => {
        let dist = shape.0.boundary_dist();
        dist.sample_iter(&mut rng).take(100).collect()
    }
};
```
## Testing
Run the example!
## Discussion
Maybe in the future it would be nice to show off all of the different
shapes that we have implemented `ShapeSample` for, but I wanted to start
just by demonstrating the functionality. Here, I chose a cube because
it's simple and because it looks good rendered transparently with
backface culling disabled.
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
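A minimal sketch of the setup described above (component and bundle names per
this description; the defaults are assumed to be reasonable):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle;
use bevy::prelude::*;

fn setup_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Sensible defaults for the raymarch settings.
        ScreenSpaceReflectionsBundle::default(),
        // SSR is built on the deferred renderer and needs both prepasses.
        DepthPrepass,
        DeferredPrepass,
    ));
}
```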
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera.
Deferred rendering must be enabled for the reflections to appear.
![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)
![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
# Objective
- Create a new 2D primitive, Rhombus, also known as "Diamond Shape"
- Simplify the creation and handling of isometric projections
- Extend Bevy's arsenal of 2D primitives
## Testing
- New unit tests created in `bevy_math/primitives` and `bevy_math/bounding`
- Tested translations, rotations, wireframe, bounding sphere, AABB and
creation parameters
---------
Co-authored-by: Luís Figueiredo <luispcfigueiredo@tecnico.ulisboa.pt>
# Objective
The `ConicalFrustum` primitive should support meshing.
## Solution
Implement meshing for the `ConicalFrustum` primitive. The implementation
is nearly identical to `Cylinder` meshing, but supports two radii.
The default conical frustum is equivalent to a cone with a height of 1
and a radius of 0.5, truncated at half-height.
![kuva](https://github.com/bevyengine/bevy/assets/57632562/b4cab136-ff55-4056-b818-1218e4f38845)
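A small sketch of meshing the primitive (the values mirror the default
described above):

```rust
use bevy::prelude::*;

fn setup(mut meshes: ResMut<Assets<Mesh>>) {
    // The default: a cone of height 1 and radius 0.5, truncated at half-height.
    let frustum = ConicalFrustum {
        radius_top: 0.25,
        radius_bottom: 0.5,
        height: 0.5,
    };
    meshes.add(frustum.mesh().build());
}
```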