**Note:** This is an adoption of @Shfty's adoption (#8131) of #3996!
All I've done is update the branch and run the docs CI.
> **Note:** This is an adoption of #3996, originally authored by
@molikto
>
> # Objective
> Allow use of `wgpu::Features::SPIRV_SHADER_PASSTHROUGH` and the
corresponding `wgpu::Device::create_shader_module_spirv` for SPIR-V
shader assets.
>
> This enables use-cases where naga is not sufficient to load a given
(valid) SPIR-V module, i.e. cases where naga lacks support for a given
SPIR-V feature employed by a third-party codegen backend like
`rust-gpu`.
>
> ## Solution
> * Reimplemented the changes from [Spirv shader
bypass #3996](https://github.com/bevyengine/bevy/pull/3996), on account
of the original branch having been deleted.
> * Documented the new `spirv_shader_passthrough` feature flag with the
appropriate platform support context from [wgpu's
documentation](https://docs.rs/wgpu/latest/wgpu/struct.Features.html#associatedconstant.SPIRV_SHADER_PASSTHROUGH).
>
> ## Changelog
> * Adds a `spirv_shader_passthrough` feature flag to the following
crates:
>
> * `bevy`
> * `bevy_internal`
> * `bevy_render`
> * Extends `RenderDevice::create_shader_module` with a conditional call
to `wgpu::Device::create_shader_module_spirv` if
`spirv_shader_passthrough` is enabled and
`wgpu::Features::SPIRV_SHADER_PASSTHROUGH` is present for the current
platform.
> * Documents the relevant `wgpu` platform support in
`docs/cargo_features.md`
---------
Co-authored-by: Josh Palmer <1253239+Shfty@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Add examples for manipulating sprites and meshes by either mutating the
handle or directly manipulating the asset, as described in #15056.
Closes #3130.
(The previous PR suffered a Git-tastrophe, and was unceremoniously
closed, sry! 😅 )
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
> Rust 1.81 released the #[expect(...)] attribute, which works like
#[allow(...)] but throws a warning if the lint isn't raised. This is
preferred to #[allow(...)] because it tells us when it can be removed.
- Adopts the parts of #15118 that are complete, and updates the branch
so it can be merged.
- There were a few conflicts, let me know if I misjudged any of 'em.
Alice's
[recommendation](https://github.com/bevyengine/bevy/issues/15059#issuecomment-2349263900)
seems well-taken, let's do this crate by crate now that @BD103 has done
the lion's share of this!
(Relates to, but doesn't yet completely finish #15059.)
Crates this _doesn't_ cover:
- bevy_input
- bevy_gilrs
- bevy_window
- bevy_winit
- bevy_state
- bevy_render
- bevy_picking
- bevy_core_pipeline
- bevy_sprite
- bevy_text
- bevy_pbr
- bevy_ui
- bevy_gltf
- bevy_gizmos
- bevy_dev_tools
- bevy_internal
- bevy_dylib
---------
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Ben Frankel <ben.frankel7@gmail.com>
Co-authored-by: Antony <antony.m.3012@gmail.com>
[*Percentage-closer soft shadows*] (PCSS) are a technique from 2004 that
allows shadows to become blurrier farther from the objects that cast them.
The technique works by introducing a *blocker search* step that runs before the normal
shadow map sampling. The blocker search step detects the difference
between the depth of the fragment being rasterized and the depth of the
nearby samples in the depth buffer. Larger depth differences result in a
larger penumbra and therefore a blurrier shadow.
To enable PCSS, fill in the `soft_shadow_size` value in
`DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This
shadow size value represents the size of the light and should be tuned
as appropriate for your scene. Higher values result in a wider penumbra
(i.e. blurrier shadows).
When using PCSS, temporal shadow maps
(`ShadowFilteringMethod::Temporal`) are recommended. If you don't use
`ShadowFilteringMethod::Temporal` and instead use
`ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as
`Temporal`, but the result won't vary over time. This produces a rather
noisy result. Doing better would likely require downsampling the shadow
map, which would be complex and slower (and would require PR #13003 to
land first).
In addition to PCSS, this commit makes the near Z plane for the shadow
map configurable on a per-light basis. Previously, it had been hardcoded
to 0.1 meters. This change was necessary to make the point light shadow
map in the example look reasonable, as otherwise the shadows appeared
far too aliased.
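A minimal sketch of configuring the two new fields on a light (field names
come from the changelog below; whether `soft_shadow_size` is an `Option` is
an assumption, and good values depend on your scene):
```rust
// Hedged sketch: spawn a directional light with soft shadows enabled.
fn spawn_sun(mut commands: Commands) {
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            shadows_enabled: true,
            soft_shadow_size: Some(5.0), // larger light size => wider penumbra => blurrier shadows
            shadow_map_near_z: 0.5,      // previously hardcoded to 0.1
            ..default()
        },
        ..default()
    });
}
```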
A new example, `pcss`, has been added. It demonstrates the
percentage-closer soft shadow technique with directional lights, point
lights, spot lights, non-temporal operation, and temporal operation. The
assets are my original work.
Both temporal and non-temporal shadows are rather noisy in the example,
and, as mentioned before, this is unavoidable without downsampling the
depth buffer, which we can't do yet. Note also that the shadows don't
look particularly great for point lights; the example simply isn't an
ideal scene for them. Nevertheless, I felt that the benefits of the
ability to do a side-by-side comparison of directional and point lights
outweighed the unsightliness of the point light shadows in that example,
so I kept the point light feature in.
Fixes #3631.
[*Percentage-closer soft shadows*]:
https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
## Changelog
### Added
* Percentage-closer soft shadows (PCSS) are now supported, allowing
shadows to become blurrier as they stretch away from objects. To use
them, set the `soft_shadow_size` field in `DirectionalLight`,
`PointLight`, or `SpotLight`, as applicable.
* The near Z value for shadow maps is now customizable via the
`shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and
`SpotLight`.
## Screenshots
PCSS off:
![Screenshot 2024-05-24
120012](https://github.com/bevyengine/bevy/assets/157897/0d35fe98-245b-44fb-8a43-8d0272a73b86)
PCSS on:
![Screenshot 2024-05-24
115959](https://github.com/bevyengine/bevy/assets/157897/83397ef8-1317-49dd-bfb3-f8286d7610cd)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
Applies feedback from previous PR #15135 'cause it got caught up in the
merge train 🚂
I couldn't resist including roll, both for completeness and due to
playing too many games that implemented it as a child.
cc: @janhohenheim
# Objective
Add examples for zooming (and orbiting) orthographic and perspective
cameras.
I'm pretty green with 3D, so please treat with suspicion! I note that
if/when #15075 is merged, `.scale` will go away so this example uses
`.scaling_mode`.
Closes #2580
### Builder changes
- Increased meshlet max vertices/triangles from 64v/64t to 255v/128t
(meshoptimizer won't allow 256v sadly). This gives us a much greater
percentage of meshlets with max triangle count (128). Still not perfect,
we still end up with some tiny <=10 triangle meshlets that never really
get simplified, but it's progress.
- Removed the error target limit. Now we allow meshoptimizer to simplify
as much as possible. No reason to cap this out, as the cluster culling
code will choose a good LOD level anyways. Again leads to higher quality
LOD trees.
- After some discussion and consulting the Nanite slides again, changed
meshlet group error from _adding_ the max child's error to the group
error, to doing `group_error = max(group_error, max_child_error)`. Error
is already cumulative between LODs as the edges we're collapsing during
simplification get longer each time.
- Bumped the 65% simplification threshold to allow up to 95% of the
original geometry (e.g. accept simplification as valid even if we only
simplified 5% of the triangles). This gives us closer to
log2(initial_meshlet_count) LOD levels, and fewer meshlet roots in the
DAG.
Still more work to be done in the future here. Maybe trying METIS for
meshlet building instead of meshoptimizer.
Using ~8 clusters per group instead of ~4 might also make a big
difference. The Nanite slides say that they have 8-32 meshlets per
group, suggesting some kind of heuristic. Unfortunately meshopt's
compute_cluster_bounds won't work with large groups atm
(https://github.com/zeux/meshoptimizer/discussions/750#discussioncomment-10562641)
so hard to test.
Based on discussion from
https://github.com/bevyengine/bevy/discussions/14998,
https://github.com/zeux/meshoptimizer/discussions/750, and discord.
### Runtime changes
- cluster:triangle packed IDs are now stored 25:7 instead of 26:6 bits,
as max triangles per cluster are now 128 instead of 64
- Hardware raster now spawns 128 * 3 vertices instead of 64 * 3 vertices
to account for the new max triangles limit
- Hardware raster now outputs NaN triangles (0 / 0) instead of
zero-positioned triangles for extra vertex invocations over the cluster
triangle count. Shouldn't really make a difference, I don't think, but I
did it anyway.
- Software raster now does 128 threads per workgroup instead of 64
threads. Each thread now loads, projects, and caches a vertex (vertices
0-127), and then if needed does so again (vertices 128-254). Each thread
then rasterizes one of 128 triangles.
- Fixed a bug with `needs_dispatch_remap`. I had the condition backwards
in my last PR, I probably committed it by accident after testing the
non-default code path on my GPU.
# Objective
Fixes#15032
## Solution
Reimplement support for the `flip_x` and `flip_y` fields.
This doesn't flip the border geometry, I'm not really sure whether that
is desirable or not.
Also fixes a bug that was causing the side and center slices to tile
incorrectly.
### Testing
```
cargo run --example ui_texture_slice_flip_and_tile
```
## Showcase
<img width="787" alt="nearest"
src="https://github.com/user-attachments/assets/bc044bae-1748-42ba-92b5-0500c87264f6">
With tiling, you need to use nearest filtering to avoid bleeding between
the slices.
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
This commit adds support for *masks* to the animation graph. A mask is a
set of animation targets (bones) that neither a node nor its descendants
are allowed to animate. Animation targets can be assigned one or more
*mask group*s, which are specific to a single graph. If a node masks out
any mask group that an animation target belongs to, animation curves for
that target will be ignored during evaluation.
The canonical use case for masks is to support characters holding
objects. Typically, character animations will contain hand animations in
the case that the character's hand is empty. (For example, running
animations may close a character's fingers into a fist.) However, when
the character is holding an object, the animation must be altered so
that the hand grips the object.
Bevy currently has no convenient way to handle this. The only workaround
that I can see is to have entirely separate animation clips for
characters' hands and bodies and keep them in sync, which is burdensome
and doesn't match artists' expectations from other engines, which all
effectively have support for masks. However, with mask group support,
this task is simple. We assign each hand to a mask group and parent all
character animations to a node. When a character grasps an object in
hand, we position the fingers as appropriate and then enable the mask
group for that hand in that node. This allows the character's animations
to run normally, while the object remains correctly attached to the
hand.
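A hypothetical sketch of that workflow (the method and field names below are
illustrative placeholders, not necessarily the exact API added by this
commit):
```rust
// Hypothetical sketch: names are illustrative, not the exact API.
const LEFT_HAND_GROUP: u32 = 0;

fn mask_out_left_hand(
    graph: &mut AnimationGraph,
    hand_target: AnimationTargetId,
    body_node: AnimationNodeIndex,
) {
    // Put the hand's animation target into a mask group on the graph...
    graph.add_target_to_mask_group(hand_target, LEFT_HAND_GROUP);
    // ...then mask that group out on the node parenting the body animations,
    // so they stop driving the hand while it grips an object.
    graph[body_node].mask |= 1u64 << LEFT_HAND_GROUP;
}
```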
Note that even with this PR, we won't have support for running separate
animations for a character's hand and the rest of the character. This is
because we're missing additive blending: there's no way to combine the
two masked animations together properly. I intend that to be a follow-up
PR.
The major engines all have support for masks, though the workflow varies
from engine to engine:
* Unity has support for masks [essentially as implemented here], though
with layers instead of a tree. However, when using the Mecanim
("Humanoid") feature, precise control over bones is lost in favor of
predefined muscle groups.
* Unreal has a feature named [*layered blend per bone*]. This allows for
separate blend weights for different bones, effectively achieving masks.
I believe that the combination of blend nodes and masks make Bevy's
animation graph as expressible as that of Unreal, once we have support
for additive blending, though you may have to use more nodes than you
would in Unreal. Moreover, separating out the concepts of "blend weight"
and "which bones this node applies to" seems like a cleaner design than
what Unreal has.
* Godot's `AnimationTree` has the notion of [*blend filters*], which are
essentially the same as masks as implemented in this PR.
Additionally, this patch fixes a bug with weight evaluation whereby
weights weren't properly propagated down to grandchildren, because the
weight evaluation for a node only checked its parent's weight, not its
evaluated weight. I considered submitting this as a separate PR, but
given that this PR refactors that code entirely to support masks and
weights under a unified "evaluated node" concept, I simply included the
fix here.
A new example, `animation_masks`, has been added. It demonstrates how to
toggle masks on and off for specific portions of a skin.
This is part of #14395, but I'm going to defer closing that issue until
we have additive blending.
[essentially as implemented here]:
https://docs.unity3d.com/560/Documentation/Manual/class-AvatarMask.html
[*layered blend per bone*]:
https://dev.epicgames.com/documentation/en-us/unreal-engine/using-layered-animations-in-unreal-engine
[*blend filters*]:
https://docs.godotengine.org/en/stable/tutorials/animation/animation_tree.html
## Migration Guide
* The serialized format of animation graphs has changed with the
addition of animation masks. To upgrade animation graph RON files, add
`mask` and `mask_groups` fields as appropriate. (They can be safely set
to zero.)
Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated gpu upload (see the previous easier to use `Vec`
`AsBindGroup` option).
- Allow initializing custom materials using `Default`. Previously, the
lack of a `Default` implement for the raw `wgpu::Buffer` type made
implementing a `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
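A hedged sketch of creating one of these assets from the main world (the
`Storage` name follows this description; the constructor shown is an
assumption for illustration):
```rust
// Hedged sketch: the `Storage::from` constructor is assumed for illustration.
fn setup(mut storages: ResMut<Assets<Storage>>) {
    // Initialize from properly aligned bytes (a size hint is the other option);
    // the handle can then be referenced from an `AsBindGroup` material.
    let _buffer: Handle<Storage> = storages.add(Storage::from(vec![0u8; 256]));
}
```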
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets instead.
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
Add `bevy_picking` sprite backend as part of the `bevy_mod_picking`
upstreamening (#12365).
## Solution
More or less a copy/paste from `bevy_mod_picking`, with the changes
[here](https://github.com/aevyrie/bevy_mod_picking/pull/354). I'm
putting that link here since those changes haven't yet made it through
review, so should probably be reviewed on their own.
## Testing
I couldn't find any sprite-backend-specific tests in `bevy_mod_picking`
and unfortunately I'm not familiar enough with Bevy's testing patterns
to write tests for code that relies on windowing and input. I'm willing
to break the pointer hit system into testable blocks and add some more
modular tests if that's deemed important enough to block, otherwise I
can open an issue for adding tests as follow-up.
## Follow-up work
- More docs/tests
- Ignore pick events on transparent sprite pixels with potential opt-out
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
# Objective
- Remove the `wgpu_trace` feature while still making it easy/possible to
record wgpu traces for debugging.
- Closes #14725.
- Get a taste of the bevy codebase. :P
## Solution
This PR performs the above objective by removing the `wgpu_trace`
feature from all `Cargo.toml` files.
However, wgpu traces are still useful for debugging - but to record
them, you need to pass in a directory path to store the traces in. To
avoid forcing users into manually creating the renderer,
`bevy_render::settings::WgpuSettings` now has a `trace_path` field, so
that all of Bevy's automatic initialization can happen while still
allowing for tracing.
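A hedged sketch of opting into tracing via the new field (the path is just an
example; see the updated `docs/debugging.md` for the authoritative steps):
```rust
// Hedged sketch: set `trace_path` on WgpuSettings when building the app.
fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            render_creation: WgpuSettings {
                trace_path: Some("my_wgpu_trace".into()),
                ..default()
            }
            .into(),
            ..default()
        }))
        .run();
}
```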
## Testing
- Did you test these changes? If so, how?
- I have tested these changes, but only via running `cargo run -p ci`. I
am hoping the Github Actions workflows will catch anything I missed.
- Are there any parts that need more testing?
- I do not believe so.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- If you want to test these changes, I have updated the debugging guide
(`docs/debugging.md`) section on WGPU Tracing.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- I ran the above command on a Windows 10 64-bit (x64) machine, using
the `stable-x86_64-pc-windows-msvc` toolchain. I do not have anything
set up for other platforms or targets (though I can't imagine this needs
testing on other platforms).
---
## Migration Guide
1. The `bevy/wgpu_trace`, `bevy_render/wgpu_trace`, and
`bevy_internal/wgpu_trace` features no longer exist. Remove them from
your `Cargo.toml`, CI, tooling, and what-not.
2. Follow the instructions in the updated `docs/debugging.md` file in
the repository, under the WGPU Tracing section.
Because of the changes made, you can now generate traces to any path,
rather than the hardcoded `%WorkspaceRoot%/wgpu_trace` (where
`%WorkspaceRoot%` is... the root of your crate's workspace) folder.
Do note that WGPU has not yet restored tracing functionality. However,
once it does, the above should be sufficient to generate new traces.
---------
Co-authored-by: TrialDragon <31419708+TrialDragon@users.noreply.github.com>
# Objective
- The goal of this PR is to make it possible to move the density texture
of a `FogVolume` over time in order to create dynamic effects like fog
moving in the wind.
- You could theoretically move the `FogVolume` itself, but this is not
ideal, because the `FogVolume` AABB would eventually leave the area. If
you want an area to remain foggy while also creating the impression that
the fog is moving in the wind, a scrolling density texture is a better
solution.
## Solution
- The PR adds a `density_texture_offset` field to the `FogVolume`
component. This offset is in the UVW coordinates of the density texture,
meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half
along the x-axis.
- Values above 1.0 are wrapped, a 1.5 offset is the same as a 0.5
offset. This makes it so that the density texture wraps around on the
other side, meaning that a repeating 3d noise texture can seamlessly
scroll forever. It also makes it easy to move the density texture over
time by simply increasing the offset every frame, as sketched below.
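A minimal sketch of doing exactly that (the field name comes from this PR;
the system body is illustrative):
```rust
// Minimal sketch: scroll the density texture a little every frame.
fn scroll_fog(time: Res<Time>, mut volumes: Query<&mut FogVolume>) {
    for mut volume in &mut volumes {
        // Offsets above 1.0 wrap, so this can keep growing indefinitely.
        volume.density_texture_offset += Vec3::new(0.02, 0.0, 0.01) * time.delta_seconds();
    }
}
```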
## Testing
- A `scrolling_fog` example has been added to demonstrate the feature.
It uses the offset to scroll a repeating 3d noise density texture to
create the impression of fog moving in the wind.
- The camera is looking at a pillar with the sun peeking behind it. This
highlights the effect the changing density has on the volumetric
lighting interactions.
- Temporal anti-aliasing combined with the `jitter` option of
`VolumetricFogSettings` is used to improve the quality of the effect.
---
## Showcase
https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8
# Objective
Fixes#14782
## Solution
Enable the lint and fix all resulting warnings (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
parts can already be merged, resulting in fewer files touched and less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
those cases? In its current stage it's just a plain adoption of the
suggestions in order to have a base for discussion.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
fixes#14569
## Solution
Added an example to the diagnostic examples and linked the code to the
docs of the diagnostic library itself.
## Testing
I tested locally on my laptop in a web browser. Looked fine. You are
able to collapse the whole "intro" part of the doc to get to the links
sooner (for those who may think that including the example code here is
annoying to scroll through).
I would like people to run `cargo doc` and go to the `bevy_diagnostic`
page to see if they have any issues or suggestions.
---
## Showcase
<img width="1067" alt="Screenshot 2024-08-14 at 12 52 16"
src="https://github.com/user-attachments/assets/70b6c18a-0bb9-4656-ba53-c416f62c6116">
---------
Co-authored-by: dpeke <dpekelis@funstage.com>
Makes the newly merged picking usable for UI elements.
Currently it both triggers the events and sends them through
`commands.trigger_targets`. We should probably figure out if this is
needed for them all.
# Objective
Hooks up observers and picking for a very simple example
## Solution
Upstreamed the UI picking backend from `bevy_mod_picking`
## Testing
Tested with the new example `picking/simple_picking.rs`
---
---------
Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
This PR is a follow-up to #14703. I forgot to also add the flags from
`package.metadata.docs.rs` to the docs build.
## Solution
As [discussed on
Discord](https://discord.com/channels/691052431525675048/1272629407659589663/1272643734260940940),
I added the missing flags to `docs.yml`.
I also updated `rustc-args` to properly make use of the array (I think
the value has to be either a space-separated string *or* an array of
strings but not an array of space-separated strings 😆)
# Objective
- Fixes#14697
## Solution
This PR modifies the existing `all_tuples!` macro to optionally accept a
`#[doc(fake_variadic)]` attribute in its input. If the attribute is
present, each invocation of the impl macro gets the correct attributes
(i.e. the first impl receives `#[doc(fake_variadic)]` while the other
impls are hidden using `#[doc(hidden)]`).
Impls for the empty tuple (unit type) are left untouched (that's what
the [standard
library](https://doc.rust-lang.org/std/cmp/trait.PartialEq.html#impl-PartialEq-for-())
and
[serde](https://docs.rs/serde/latest/serde/trait.Serialize.html#impl-Serialize-for-())
do).
To work around https://github.com/rust-lang/cargo/issues/8811 and to get
impls on re-exports to correctly show up as variadic, `--cfg docsrs_dep`
is passed when building the docs for the toplevel `bevy` crate.
`#[doc(fake_variadic)]` only works on tuples and fn pointers, so impls
for structs like `AnyOf<(T1, T2, ..., Tn)>` are unchanged.
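A hedged sketch of what an opted-in invocation might look like (the exact
shape of the attribute in the macro input is an assumption):
```rust
// Hedged sketch: pass the attribute through to the generated impls.
all_tuples!(
    #[doc(fake_variadic)]
    impl_my_trait, // the impl macro invoked once per tuple arity
    1,
    15,
    T
);
```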
## Testing
I built the docs locally using `RUSTDOCFLAGS='--cfg docsrs'
RUSTFLAGS='--cfg docsrs_dep' cargo +nightly doc --no-deps --workspace`
and checked the documentation page of a trait both in its original crate
and the re-exported version in `bevy`.
The description should correctly mention for how many tuple items the
trait is implemented.
I added `rustc-args` for docs.rs to the `bevy` crate, I hope there
aren't any other notable crates that re-export `#[doc(fake_variadic)]`
traits.
---
## Showcase
`bevy_ecs::query::QueryData`:
<img width="1015" alt="Screenshot 2024-08-12 at 16 41 28"
src="https://github.com/user-attachments/assets/d40136ed-6731-475f-91a0-9df255cd24e3">
`bevy::ecs::query::QueryData` (re-export):
<img width="1005" alt="Screenshot 2024-08-12 at 16 42 57"
src="https://github.com/user-attachments/assets/71d44cf0-0ab0-48b0-9a51-5ce332594e12">
## Original Description
<details>
Resolves #14697
Submitting as a draft for now, very WIP.
Unfortunately, the docs don't show the variadics nicely when looking at
reexported items.
For example:
`bevy_ecs::bundle::Bundle` correctly shows the variadic impl:
![image](https://github.com/user-attachments/assets/90bf8af1-1d1f-4714-9143-cdd3d0199998)
while `bevy::ecs::bundle::Bundle` (the reexport) shows all the impls
(not good):
![image](https://github.com/user-attachments/assets/439c428e-f712-465b-bec2-481f7bf5870b)
Built using `RUSTDOCFLAGS='--cfg docsrs' cargo +nightly doc --workspace
--no-deps` (`--no-deps` because of wgpu-core).
Maybe I missed something or this is a limitation in the *totally not
private* `#[doc(fake_variadic)]` thingy. In any case I desperately need
some sleep now :))
</details>
This PR is based on top of #12982
# Objective
- Mesh2d currently only has an alpha blended phase. Most sprites don't
need transparency though.
- For some 2d games it can be useful to have a 2d depth buffer
## Solution
- Add an opaque phase to render Mesh2d that don't need transparency
- This phase currently uses the `SortedRenderPhase` to make it easier to
implement based on the already existing transparent phase. A follow up
PR will switch this to `BinnedRenderPhase`.
- Add a 2d depth buffer
- Use that depth buffer in the transparent phase to make sure that
sprites and transparent mesh2d are displayed correctly
## Testing
I added the mesh2d_transforms example that layers many opaque and
transparent mesh2d to make sure they all get displayed correctly. I also
confirmed it works with sprites by modifying that example locally.
---
## Changelog
- Added `AlphaMode2d`
- Added `Opaque2d` render phase
- Camera2d now has a `ViewDepthTexture` component
## Migration Guide
- `ColorMaterial` now contains `AlphaMode2d`. To keep previous
behaviour, use `AlphaMode::BLEND`. If you know your sprite is opaque,
use `AlphaMode::OPAQUE`
## Follow up PRs
- See tracking issue: #13265
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christopher Biscardi <chris@christopherbiscardi.com>
# Objective
Adds a new `Monitor` component representing a winit `MonitorHandle` that
can be used to spawn new windows and check for system monitor
information.
Closes #12955.
## Solution
For every winit event, check available monitors and spawn them into the
world as components.
## Testing
TODO:
- [x] Test plugging in and unplugging monitor during app runtime
- [x] Test spawning a window on a second monitor by entity id
- [ ] Since this touches winit, test all platforms
---
## Changelog
- Adds a new `Monitor` component that can be queried for information
about available system monitors.
## Migration Guide
- `WindowMode` variants now take a `MonitorSelection`, which can be set
to `MonitorSelection::Primary` to mirror the old behavior, as sketched below.
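A hedged sketch of that migration:
```rust
// Hedged sketch: mirror the old fullscreen behavior with MonitorSelection::Primary.
fn go_fullscreen(mut windows: Query<&mut Window>) {
    for mut window in &mut windows {
        // Previously: window.mode = WindowMode::Fullscreen;
        window.mode = WindowMode::Fullscreen(MonitorSelection::Primary);
    }
}
```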
---------
Co-authored-by: Pascal Hertleif <pascal@technocreatives.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
- A lot of mid-level rendering apis are hard to figure out because they
don't have any examples
- SpecializedMeshPipeline can be really useful in some cases when you
want more flexibility than a Material without having to go to low level
apis.
## Solution
- Add an example showing how to make a custom `SpecializedMeshPipeline`.
## Testing
- Did you test these changes? If so, how?
- Are there any parts that need more testing?
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
---
## Showcase
The example just spawns 3 triangles in a triangle pattern.
![image](https://github.com/user-attachments/assets/c3098758-94c4-4775-95e5-1d7c7fb9eb86)
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Dynamic plugins were deprecated in #13080 due to being unsound. The
plan was to deprecate them in 0.14 and remove them in 0.15.
## Solution
- Remove all dynamic plugin functionality.
- Update documentation to reflect this change.
---
## Migration Guide
Dynamic plugins were deprecated in 0.14 for being unsound, and they have
now been fully removed. Please consider using the alternatives listed in
the `bevy_dynamic_plugin` crate documentation, or worst-case scenario
you may copy the code from 0.14.
# Objective
- Make it possible to know *what* changed your component or resource.
- Common need when debugging, when you want to know the last code
location that mutated a value in the ECS.
- This feature would be very useful for the editor alongside system
stepping.
## Solution
- Adds the caller location to column data.
- Mutations now `track_caller` all the way up to the public API.
- Commands that invoke these functions immediately call
`Location::caller`, and pass this into the functions, instead of the
functions themselves attempting to get the caller. This would not work
for commands which are deferred, as the commands are executed by the
scheduler, not the user's code.
## Testing
- The `component_change_detection` example now shows where the component
was mutated:
```
2024-07-28T06:57:48.946022Z INFO component_change_detection: Entity { index: 1, generation: 1 }: New value: MyComponent(0.0)
2024-07-28T06:57:49.004371Z INFO component_change_detection: Entity { index: 1, generation: 1 }: New value: MyComponent(1.0)
2024-07-28T06:57:49.012738Z WARN component_change_detection: Change detected!
-> value: Ref(MyComponent(1.0))
-> added: false
-> changed: true
-> changed by: examples/ecs/component_change_detection.rs:36:23
```
- It's also possible to inspect change location from a debugger:
<img width="608" alt="image"
src="https://github.com/user-attachments/assets/c90ecc7a-0462-457a-80ae-42e7f5d346b4">
---
## Changelog
- Added source locations to ECS change detection behind the
`track_change_detection` flag.
## Migration Guide
- Added `changed_by` field to many internal ECS functions used with
change detection when the `track_change_detection` feature flag is
enabled. Use `Location::caller()` to provide the source of the function
call.
---------
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
- Fix issue #2611
## Solution
- Add `--generate-link-to-definition` to all the `rustdoc-args` arrays
in the `Cargo.toml`s (for docs.rs)
- Add `--generate-link-to-definition` to the `RUSTDOCFLAGS` environment
variable in the docs workflow (for dev-docs.bevyengine.org)
- Document all the workspace crates in the docs workflow (needed because
otherwise only the source code of the `bevy` package will be included,
making the argument useless)
- I think this also fixes #3662, since it fixes the bug on
dev-docs.bevyengine.org, while on docs.rs it has been fixed for a while
on their side.
---
## Changelog
- The source code viewer on docs.rs now includes links to the
definitions.
# Objective
- Fixes: https://github.com/bevyengine/bevy/issues/14036
## Solution
- Add a world space transformation for the environment sample direction.
## Testing
- I have tested the newly added `transform` field using the newly added
`rotate_environment_map` example.
https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb
## Migration Guide
- Since we have added a new field to the `EnvironmentMapLight` struct,
users will need to include `..default()` or some rotation value in their
initialization code, as sketched below.
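A hedged sketch of that migration (the pre-existing field names are assumed
from Bevy 0.14; the new field is left at its default):
```rust
// Hedged migration sketch: existing initializers only need to add `..default()`.
fn add_environment_map(
    camera: Entity,
    diffuse_map: Handle<Image>,
    specular_map: Handle<Image>,
    commands: &mut Commands,
) {
    commands.entity(camera).insert(EnvironmentMapLight {
        diffuse_map,
        specular_map,
        intensity: 900.0,
        ..default() // covers the newly added field
    });
}
```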
Progress towards https://github.com/bevyengine/bevy/issues/7386.
Following discussion
https://discord.com/channels/691052431525675048/1253260494538539048/1253387942311886960
This Pull Request adds an example to detect system order ambiguities,
and also asserts none exist.
A lot of schedules are ignored in order to have the test pass; we
should strive to make them pass, but in other pull requests.
<details><summary>example output <b>summary</b>, without ignored
schedules</summary>
<p>
```txt
$ cargo run --example ambiguity_detection 2>&1 | grep -C 1 "pairs of syst"
2024-06-21T13:17:55.776585Z WARN bevy_ecs::schedule::schedule: Schedule First has ambiguities.
1 pairs of systems with conflicting data access have indeterminate execution order. Consider adding `before`, `after`, or `ambiguous_with` relationships between these:
-- bevy_time::time_system (in set TimeSystem) and bevy_ecs::event::event_update_system (in set EventUpdates)
--
2024-06-21T13:17:55.782265Z WARN bevy_ecs::schedule::schedule: Schedule PreUpdate has ambiguities.
11 pairs of systems with conflicting data access have indeterminate execution order. Consider adding `before`, `after`, or `ambiguous_with` relationships between these:
-- bevy_pbr::prepass::update_mesh_previous_global_transforms and bevy_asset::server::handle_internal_asset_events
--
2024-06-21T13:17:55.809516Z WARN bevy_ecs::schedule::schedule: Schedule PostUpdate has ambiguities.
63 pairs of systems with conflicting data access have indeterminate execution order. Consider adding `before`, `after`, or `ambiguous_with` relationships between these:
-- bevy_ui::accessibility::image_changed and bevy_ecs::schedule::executor::apply_deferred
--
2024-06-21T13:17:55.816287Z WARN bevy_ecs::schedule::schedule: Schedule Last has ambiguities.
3 pairs of systems with conflicting data access have indeterminate execution order. Consider adding `before`, `after`, or `ambiguous_with` relationships between these:
-- bevy_gizmos::update_gizmo_meshes<bevy_gizmos::aabb::AabbGizmoConfigGroup> (in set UpdateGizmoMeshes) and bevy_gizmos::update_gizmo_meshes<bevy_gizmos::light::LightGizmoConfigGroup> (in set UpdateGizmoMeshes)
--
2024-06-21T13:17:55.831074Z WARN bevy_ecs::schedule::schedule: Schedule ExtractSchedule has ambiguities.
296 pairs of systems with conflicting data access have indeterminate execution order. Consider adding `before`, `after`, or `ambiguous_with` relationships between these:
-- bevy_render::extract_component::extract_components<bevy_sprite::SpriteSource> and bevy_render::render_asset::extract_render_asset<bevy_sprite::mesh2d::material::PreparedMaterial2d<bevy_sprite::mesh2d::color_material::ColorMaterial>>
```
</p>
</details>
To try locally:
```sh
CI_TESTING_CONFIG="./.github/example-run/ambiguity_detection.ron" cargo run --example ambiguity_detection --features "bevy_ci_testing,trace,trace_chrome"
```
---------
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
Fill a gap in the functionality of our curve constructions by allowing
users to easily build cyclic curves from control data.
## Solution
Here I opted for something lightweight and discoverable. There is a new
`CyclicCubicGenerator` trait with a method `to_curve_cyclic` which uses
splines' control data to create curves that are cyclic. For now, its
signature is exactly like that of `CubicGenerator` — `to_curve_cyclic`
just yields a `CubicCurve`:
```rust
/// Implement this on cubic splines that can generate a cyclic cubic curve from their spline parameters.
///
/// This makes sense only when the control data can be interpreted cyclically.
pub trait CyclicCubicGenerator<P: VectorSpace> {
/// Build a cyclic [`CubicCurve`] by computing the interpolation coefficients for each curve segment.
fn to_curve_cyclic(&self) -> CubicCurve<P>;
}
```
This trait has been implemented for `CubicHermite`,
`CubicCardinalSpline`, `CubicBSpline`, and `LinearSpline`:
<img width="753" alt="Screenshot 2024-07-01 at 8 58 27 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/69ae0802-3b78-4fb9-b73a-6f842cf3b33c">
<img width="628" alt="Screenshot 2024-07-01 at 9 00 14 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/2992175a-a96c-40fc-b1a1-5206c3572cde">
<img width="606" alt="Screenshot 2024-07-01 at 8 59 36 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/9e99eb3a-dbe6-42da-886c-3d3e00410d03">
<img width="603" alt="Screenshot 2024-07-01 at 8 59 01 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/d037bc0c-396a-43af-ab5c-fad9a29417ef">
(Each type pictured respectively with the control points rendered as
green spheres; tangents not pictured in the case of the Hermite spline.)
These curves are all parametrized so that the output of `to_curve` and
the output of `to_curve_cyclic` are similar. For instance, in
`CubicCardinalSpline`, the first output segment is a curve segment
joining the first and second control points in each, although it is
constructed differently. In the other cases, the segments from
`to_curve` are a subset of those in `to_curve_cyclic`, with the new
segments appearing at the end.
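An illustrative usage sketch (the control points are arbitrary placeholders):
```rust
// Illustrative sketch: build a closed loop from cardinal-spline control data.
let control_points = [
    vec2(-1.0, -20.0),
    vec2(3.0, 2.0),
    vec2(5.0, 3.0),
    vec2(9.0, 8.0),
];
let curve = CubicCardinalSpline::new(0.3, control_points).to_curve_cyclic();
// Unlike `to_curve`, this also yields a final segment joining the last
// control point back to the first.
let _start = curve.position(0.0);
```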
## Testing
I rendered cyclic splines from control data and made sure they looked
reasonable. Existing tests are intact for splines where previous code
was modified. (Note that the coefficient computation for cyclic spline
segments is almost verbatim identical to that of their non-cyclic
counterparts.)
The Bezier benchmarks also look fine.
---
## Changelog
- Added `CyclicCubicGenerator` trait to `bevy_math::cubic_splines` for
creating cyclic curves from control data.
- Implemented `CyclicCubicGenerator` for `CubicHermite`,
`CubicCardinalSpline`, `CubicBSpline`, and `LinearSpline`.
- `bevy_math` now depends on `itertools`.
---
## Discussion
### Design decisions
The biggest thing here is just the approach taken in the first place:
namely, the cyclic constructions use new methods on the same old
structs. This choice was made to reduce friction and increase
discoverability but also because creating new ones just seemed
unnecessary: the underlying data would have been the same, so creating
something like "`CyclicCubicBSpline`" whose internally-held control data
is regarded as cyclic in nature doesn't really accomplish much — the end
result for the user is basically the same either way.
Similarly, I don't presently see a pressing need for `to_curve_cyclic`
to output something other than a `CubicCurve`, although changing this in
the future may be useful. See below.
A notable omission here is that `CyclicCubicGenerator` is not
implemented for `CubicBezier`. This is not a gap waiting to be filled —
`CubicBezier` just doesn't have enough data to join its start with its
end without just making up the requisite control points wholesale. In
all the cases where `CyclicCubicGenerator` has been implemented here,
the fashion in which the ends are connected is quite natural and follows
the semantics of the associated spline construction.
### Future direction
There are two main things here:
1. We should investigate whether we should do something similar for
NURBS. I just don't know that much about NURBS at the moment, so I
regarded this as out of scope for the PR.
2. We may eventually want to change the output type of
`CyclicCubicGenerator::to_curve_cyclic` to a type which reifies the
cyclic nature of the curve output. This wasn't done in this PR because
I'm unsure how much value a type-level guarantee of cyclicity actually
has, but if some useful features make sense only in the case of cyclic
curves, this might be worth pursuing.
Currently, volumetric fog is global and affects the entire scene
uniformly. This is inadequate for many use cases, such as local smoke
effects. To address this problem, this commit introduces *fog volumes*,
which are axis-aligned bounding boxes (AABBs) that specify fog
parameters inside their boundaries. Such volumes can also specify a
*density texture*, a 3D texture of voxels that specifies the density of
the fog at each point.
To create a fog volume, add a `FogVolume` component to an entity (which
is included in the new `FogVolumeBundle` convenience bundle). Like light
probes, a fog volume is conceptually a 1×1×1 cube centered on the
origin; a transform can be used to position and resize this region. Many
of the fields on the existing `VolumetricFogSettings` have migrated to
the new `FogVolume` component. `VolumetricFogSettings` on a camera is
still needed to enable volumetric fog. However, by itself
`VolumetricFogSettings` is no longer sufficient to enable volumetric
fog; a `FogVolume` must be present. Applications that wish to retain the
old global fog behavior can simply surround the scene with a large fog
volume.
By way of implementation, this commit converts the volumetric fog shader
from a full-screen shader to one applied to a mesh. The strategy is
different depending on whether the camera is inside or outside the fog
volume. If the camera is inside the fog volume, the mesh is simply a
plane scaled to the viewport, effectively falling back to a full-screen
pass. If the camera is outside the fog volume, the mesh is a cube
transformed to coincide with the boundaries of the fog volume's AABB.
Importantly, in the latter case, only the front faces of the cuboid are
rendered. Instead of treating the boundaries of the fog as a sphere
centered on the camera position, as we did prior to this patch, we
raytrace the far planes of the AABB to determine the portion of each ray
contained within the fog volume. We then raymarch in shadow map space as
usual. If a density texture is present, we modulate the fixed density
value with the trilinearly-interpolated value from that texture.
Furthermore, this patch introduces optional jitter to fog volumes,
intended for use with TAA. This modifies the position of the ray from
frame to frame using interleaved gradient noise, in order to reduce
aliasing artifacts. Many implementations of volumetric fog in games use
this technique. Note that this patch makes no attempt to write a motion
vector; this is because when a view ray intersects multiple voxels
there's no single direction of motion. Consequently, fog volumes can
have ghosting artifacts, but because fog is "ghostly" by its nature,
these artifacts are less objectionable than they would be for opaque
objects.
A new example, `fog_volumes`, has been added. It demonstrates a single
fog volume containing a voxelized representation of the Stanford bunny.
The existing `volumetric_fog` example has been updated to use the new
local volumetrics API.
## Changelog
### Added
* Local `FogVolume`s are now supported, to localize fog to specific
regions. They can optionally have 3D density voxel textures for precise
control over the distribution of the fog.
### Changed
* `VolumetricFogSettings` on a camera no longer enables volumetric fog;
instead, it simply enables the processing of `FogVolume`s within the
scene.
## Migration Guide
* A `FogVolume` is now necessary in order to enable volumetric fog, in
addition to `VolumetricFogSettings` on the camera. Existing uses of
volumetric fog can be migrated by placing a large `FogVolume`
surrounding the scene.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
The borders example is separate from the rounded borders example. If you
find the borders example, you may miss the rounded borders example.
## Solution
Merge the examples in a basic way, since there is enough room to show
all options at the same time.
I also considered renaming the borders and rounded borders examples so
that they would be located next to each other in repo and UI, but it
felt like having a singular example was better.
## Testing
```
cargo run --example borders
```
---
## Showcase
The merged example looks like this:
![screenshot-2024-07-14-at-13 40
10@2x](https://github.com/user-attachments/assets/0f49cc46-1ca0-40d0-abec-020cbf0fb205)
# Objective
Type data is a **super** useful tool to know about when working with
reflection. However, most users don't fully understand how it works or
that you can use it for more than just object-safe traits.
This is unfortunate because it can be surprisingly simple to manually
create your own type data.
We should have an example detailing how type data works, how users can
define their own, and how they can be used.
## Solution
Added a `type_data` example.
This example goes through all the major points about type data:
- Why we need them
- How they can be defined
- The two ways they can be registered
- A list of common/important type data provided by Bevy
I also thought it might be good to go over the `#[reflect_trait]` macro
as part of this example since it has all the other context, including
how to define type data in places where `#[reflect_trait]` won't work.
Because of this, I removed the `trait_reflection` example.
## Testing
You can run the example locally with the following command:
```
cargo run --example type_data
```
---
## Changelog
- Added the `type_data` example
- Removed the `trait_reflection` example
This commit creates a new built-in postprocessing shader that's designed
to hold miscellaneous postprocessing effects, and starts it off with
chromatic aberration. Possible future effects include vignette, film
grain, and lens distortion.
[Chromatic aberration] is a common postprocessing effect that simulates
lenses that fail to focus all colors of light to a single point. It's
often used for impact effects and/or horror games. This patch uses the
technique from *Inside* ([Gjøl & Svendsen 2016]), which allows the
developer to customize the particular color pattern to achieve different
effects. Unity HDRP uses the same technique, while Unreal has a
hard-wired fixed color pattern.
A new example, `post_processing`, has been added, in order to
demonstrate the technique. The existing `post_processing` shader has
been renamed to `custom_post_processing`, for clarity.
[Chromatic aberration]:
https://en.wikipedia.org/wiki/Chromatic_aberration
[Gjøl & Svendsen 2016]:
https://github.com/playdeadgames/publications/blob/master/INSIDE/rendering_inside_gdc2016.pdf
![Screenshot 2024-06-04
180304](https://github.com/bevyengine/bevy/assets/157897/3631c64f-a615-44fe-91ca-7f04df0a54b2)
![Screenshot 2024-06-04
180743](https://github.com/bevyengine/bevy/assets/157897/ee055cbf-4314-49c5-8bfa-8d8a17bd52bb)
## Changelog
### Added
* Chromatic aberration is now available as a built-in postprocessing
effect. To use it, add `ChromaticAberration` to your camera.
# Objective
Add basic bubbling to observers, modeled off `bevy_eventlistener`.
## Solution
- Introduce a new `Traversal` trait for components which point to other
entities.
- Provide a default `TraverseNone: Traversal` component which cannot be
constructed.
- Implement `Traversal` for `Parent`.
- The `Event` trait now has an associated `Traversal` which defaults to
`TraverseNone`.
- Added a field `bubbling: &mut bool` to `Trigger` which can be used to
instruct the runner to bubble the event to the entity specified by the
event's traversal type.
- Added an associated constant `SHOULD_BUBBLE` to `Event` which
configures the default bubbling state.
- Added logic to wire this all up correctly.
Introducing the new associated information directly on `Event` (instead
of a new `BubblingEvent` trait) lets us dispatch both bubbling and
non-bubbling events through the same api.
## Testing
I have added several unit tests to cover the common bugs I identified
during development. Running the unit tests should be enough to validate
correctness. The changes effect unsafe portions of the code, but should
not change any of the safety assertions.
## Changelog
Observers can now bubble up the entity hierarchy! To create a bubbling
event, change your `Derive(Event)` to something like the following:
```rust
#[derive(Component)]
struct MyEvent;
impl Event for MyEvent {
type Traverse = Parent; // This event will propagate up from child to parent.
const AUTO_PROPAGATE: bool = true; // This event will propagate by default.
}
```
You can dispatch a bubbling event using the normal
`world.trigger_targets(MyEvent, entity)`.
Halting an event mid-bubble can be done using
`trigger.propagate(false)`. Events with `AUTO_PROPAGATE = false` will
not propagate by default, but you can enable it using
`trigger.propagate(true)`.
If there are multiple observers attached to a target, they will all be
triggered by bubbling. They all share a bubbling state, which can be
accessed mutably using `trigger.propagation_mut()` (`trigger.propagate`
is just sugar for this).
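For example, a hedged sketch of an observer that handles the event and halts
further bubbling (reusing the `MyEvent` type from the snippet above):
```rust
// Hedged sketch: react to the event, then stop it from bubbling any further.
fn absorb_event(mut trigger: Trigger<MyEvent>) {
    trigger.propagate(false);
}
```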
You can choose to implement `Traversal` for your own types, if you want
to bubble along a different structure than provided by `bevy_hierarchy`.
Implementers must be careful never to produce loops, because this will
cause bevy to hang.
## Migration Guide
+ Manual implementations of `Event` should add associated type `Traverse
= TraverseNone` and associated constant `AUTO_PROPAGATE = false`;
+ `Trigger::new` has new field `propagation: &mut Propagation` which
provides the bubbling state.
+ `ObserverRunner` now takes the same `&mut Propagation` as a final
parameter.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Function reflection requires a lot of macro code generation in the form
of several `all_tuples!` invocations, as well as impls generated in the
`Reflect` derive macro.
Seeing as function reflection is currently a bit more niche, it makes
sense to gate it all behind a feature.
## Solution
Add a `functions` feature to `bevy_reflect`, which can be enabled in
Bevy using the `reflect_functions` feature.
## Testing
You can test locally by running:
```
cargo test --package bevy_reflect
```
That should ensure that everything still works with the feature
disabled.
To test with the feature on, you can run:
```
cargo test --package bevy_reflect --features functions
```
---
## Changelog
- Moved function reflection behind a Cargo feature
(`bevy/reflect_functions` and `bevy_reflect/functions`)
- Add `IntoFunction` export in `bevy_reflect::prelude`
## Internal Migration Guide
> [!important]
> Function reflection was introduced as part of the 0.15 dev cycle. This
migration guide was written for developers relying on `main` during this
cycle, and is not a breaking change coming from 0.14.
Function reflection is now gated behind a feature. To use function
reflection, enable the feature:
- If using `bevy_reflect` directly, enable the `functions` feature
- If using `bevy`, enable the `reflect_functions` feature
# Objective
- Moves `smooth_follow.rs` into the movement directory in examples
- Fixes #14241
## Solution
- Move `smooth_follow.rs` to the movement dir in examples.
_copy-pasted from my doc comment in the code_
# Objective
This example shows how to properly handle player input, advance a
physics simulation in a fixed timestep, and display the results.
The classic source for how and why this is done is Glenn Fiedler's
article [Fix Your
Timestep!](https://gafferongames.com/post/fix_your_timestep/).
## Motivation
The naive way of moving a player is to just update their position like
so:
```rust
transform.translation += velocity;
```
The issue here is that the player's movement speed will be tied to the
frame rate.
Faster machines will move the player faster, and slower machines will
move the player slower.
In fact, you can observe this today when running some old games that did
it this way on modern hardware!
The player will move at a breakneck pace.
The more sophisticated way is to update the player's position based on
the time that has passed:
```rust
transform.translation += velocity * time.delta_seconds();
```
This way, velocity represents a speed in units per second, and the
player will move at the same speed regardless of the frame rate.
However, this can still be problematic if the frame rate is very low or
very high. If the frame rate is very low, the player will move in large
jumps. This may lead to a player moving in such large jumps that they
pass through walls or other obstacles. In general, you cannot expect a
physics simulation to behave nicely with *any* delta time. Ideally, we
want to have some stability in what kinds of delta times we feed into
our physics simulation.
The solution is using a fixed timestep. This means that we advance the
physics simulation by a fixed amount at a time. If the real time that
passed between two frames is less than the fixed timestep, we simply
don't advance the physics simulation at all.
If it is more, we advance the physics simulation multiple times until we
catch up. You can read more about how Bevy implements this in the
documentation for
[`bevy::time::Fixed`](https://docs.rs/bevy/latest/bevy/time/struct.Fixed.html).
This leaves us with a last problem, however. If our physics simulation
may advance zero or multiple times per frame, there may be frames in
which the player's position did not need to be updated at all, and some
where it is updated by a large amount that resulted from running the
physics simulation multiple times. This is physically correct, but
visually jarring. Imagine a player moving in a straight line, but
depending on the frame rate, they may sometimes advance by a large
amount and sometimes not at all. Visually, we want the player to move
smoothly. This is why we need to separate the player's position in the
physics simulation from the player's position in the visual
representation. The visual representation can then be interpolated
smoothly based on the last and current actual player position in the
physics simulation.
This is a tradeoff: every visual frame is now slightly lagging behind
the actual physical frame, but in return, the player's movement will
appear smooth. There are other ways to compute the visual representation
of the player, such as extrapolation. See the [documentation of the
lightyear
crate](https://cbournhonesque.github.io/lightyear/book/concepts/advanced_replication/visual_interpolation.html)
for a nice overview of the different methods and their tradeoffs.
## Implementation
- The player's velocity is stored in a `Velocity` component. This is the
speed in units per second.
- The player's current position in the physics simulation is stored in a
`PhysicalTranslation` component.
- The player's previous position in the physics simulation is stored in
a `PreviousPhysicalTranslation` component.
- The player's visual representation is stored in Bevy's regular
`Transform` component.
- Every frame, we go through the following steps:
- Advance the physics simulation by one fixed timestep in the
`advance_physics` system.
This is run in the `FixedUpdate` schedule, which runs before the
`Update` schedule.
- Update the player's visual representation in the
`update_displayed_transform` system.
This interpolates between the player's previous and current position in
the physics simulation (see the sketch after this list).
- Update the player's velocity based on the player's input in the
`handle_input` system.
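A minimal sketch of the interpolation step, assuming `PhysicalTranslation`
and `PreviousPhysicalTranslation` are `Vec3` newtypes (the real example may
differ in details):
```rust
// Minimal sketch: blend between the last two fixed-timestep positions.
fn update_displayed_transform(
    fixed_time: Res<Time<Fixed>>,
    mut players: Query<(&mut Transform, &PhysicalTranslation, &PreviousPhysicalTranslation)>,
) {
    for (mut transform, current, previous) in &mut players {
        // How far we are between the last two fixed-timestep runs, in 0..1.
        let alpha = fixed_time.overstep_fraction();
        transform.translation = previous.0.lerp(current.0, alpha);
    }
}
```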
## Relevant Issues
Related to #1259.
I'm also fairly sure I've seen an issue somewhere made by
@alice-i-cecile about showing how to move a character correctly in a
fixed timestep, but I cannot find it.
Bump version after release
This PR has been auto-generated
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Replace ab_glyph with the more capable cosmic-text
Fixes#7616.
Cosmic-text is a more mature text-rendering library that handles scripts
and ligatures better than ab_glyph; it can also handle system fonts,
which can be implemented in Bevy in the future.
Rebase of https://github.com/bevyengine/bevy/pull/8808
## Changelog
Replaces text renderer ab_glyph with cosmic-text
The definition of the font size has changed with the migration to cosmic
text. The behavior is now consistent with other platforms (e.g. the
web), where the font size in pixels measures the height of the font (the
distance between the top of the highest ascender and the bottom of the
lowest descender). Font sizes in your app need to be rescaled to
approximately 1.2x smaller; for example, if you were using a font size
of 60.0, you should now use a font size of 50.0.
## Migration guide
- `Text2dBounds` has been replaced with `TextBounds`, and it now accepts
`Option`s for the bounds, instead of using `f32::INFINITY` to indicate
lack of bounds (see the sketch after this list)
- Text sizes should be changed; dividing the current size by 1.2 will
result in the same size as before.
- `TextSettings` struct is removed
- Feature `subpixel_alignment` has been removed since cosmic-text
already does this automatically
- `TextBundle`s and other entities that render text now require the
`CosmicBuffer` component as well
## Suggested followups:
- TextPipeline: reconstruct byte indices for keeping track of eventual
cursors in text input
- TextPipeline: (future work) split text entities into section entities
- TextPipeline: (future work) text editing
- Support line height as an option. Unitless `1.2` is the default used
in browsers (1.2x font size).
- Support System Fonts and font families
- Example showing off animated text styles, e.g. throbbing hyperlinks
---------
Co-authored-by: tigregalis <anak.harimau@gmail.com>
Co-authored-by: Nico Burns <nico@nicoburns.com>
Co-authored-by: sam edelsten <samedelsten1@gmail.com>
Co-authored-by: Dimchikkk <velo.app1@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
- Standard Material is starting to run out of samplers (it currently uses
13 with no additional features enabled; I think in 0.13 it was 12).
- This change adds a new feature switch, modelled on the other ones
which add features to Standard Material, to turn off the new anisotropy
feature by default.
## Solution
- feature + texture define
## Testing
- Anisotropy example still works fine
- Other samples work fine
- Standard Material now takes 12 samplers by default on my Mac instead
of 13
## Migration Guide
- Add the `pbr_anisotropy_texture` feature if you are using that texture
in any standard materials.
---------
Co-authored-by: John Payne <20407779+johngpayne@users.noreply.github.com>
# Objective
It's not always obvious what the default value for `RenderLayers`
represents. It is documented, but since it's an implementation of a
trait method the documentation may or may not be shown depending on the
IDE.
## Solution
Add documentation to the `none` method that explicitly calls out the
difference.
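For illustration, a small sketch of the difference (a sketch only, assuming the `none`/`layer` constructors and `intersects`):
```rust
use bevy::render::view::RenderLayers;

// The default value belongs to layer 0, so entities without an explicit
// `RenderLayers` component are seen by cameras on layer 0.
assert!(RenderLayers::default().intersects(&RenderLayers::layer(0)));

// `none()` belongs to no layer at all, so such an entity is not rendered
// by any camera until layers are added explicitly.
assert!(!RenderLayers::none().intersects(&RenderLayers::layer(0)));
```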
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
We're able to reflect types sooooooo... why not functions?
The goal of this PR is to make functions callable within a dynamic
context, where type information is not readily available at compile
time.
For example, if we have a function:
```rust
fn add(left: i32, right: i32) -> i32 {
left + right
}
```
And two `Reflect` values we've already validated are `i32` types:
```rust
let left: Box<dyn Reflect> = Box::new(2_i32);
let right: Box<dyn Reflect> = Box::new(2_i32);
```
We should be able to call `add` with these values:
```rust
// ?????
let result: Box<dyn Reflect> = add.call_dynamic(left, right);
```
And ideally this wouldn't just work for functions, but methods and
closures too!
Right now, users have two options:
1. Manually parse the reflected data and call the function themselves
2. Rely on registered type data to handle the conversions for them
For a small function like `add`, this isn't too bad. But what about for
more complex functions? What about for many functions?
At worst, this process is error-prone. At best, it's simply tedious.
And this is assuming we know the function at compile time. What if we
want to accept a function dynamically and call it with our own
arguments?
It would be much nicer if `bevy_reflect` could alleviate some of the
problems here.
## Solution
Added function reflection!
This adds a `DynamicFunction` type to wrap a function dynamically. This
can be called with an `ArgList`, which is a dynamic list of
`Reflect`-containing `Arg` arguments. It returns a `FunctionResult`
which indicates whether or not the function call succeeded, returning a
`Reflect`-containing `Return` type if it did succeed.
Many functions can be converted into this `DynamicFunction` type thanks
to the `IntoFunction` trait.
Taking our previous `add` example, this might look something like
(explicit types added for readability):
```rust
fn add(left: i32, right: i32) -> i32 {
left + right
}
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
And it also works on closures:
```rust
let add = |left: i32, right: i32| left + right;
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
As well as methods:
```rust
#[derive(Reflect)]
struct Foo(i32);
impl Foo {
fn add(&mut self, value: i32) {
self.0 += value;
}
}
let mut foo = Foo(2);
let mut function: DynamicFunction = Foo::add.into_function();
let args: ArgList = ArgList::new().push_mut(&mut foo).push_owned(2_i32);
function.call(args).unwrap();
assert_eq!(foo.0, 4);
```
### Limitations
While this does cover many functions, it is far from a perfect system
and has quite a few limitations. Here are a few of the limitations when
using `IntoFunction`:
1. The lifetime of the return value is only tied to the lifetime of the
first argument (useful for methods). This means you can't have a
function like `(a: i32, b: &i32) -> &i32` without creating the
`DynamicFunction` manually.
2. Only 15 arguments are currently supported. If the first argument is a
(mutable) reference, this number increases to 16.
3. Manual implementations of `Reflect` will need to implement the new
`FromArg`, `GetOwnership`, and `IntoReturn` traits in order to be used
as arguments/return types.
And some limitations of `DynamicFunction` itself:
1. All arguments share the same lifetime, or rather, they will shrink to
the shortest lifetime.
2. Closures that capture their environment may need to have their
`DynamicFunction` dropped before accessing those variables again (there
is a `DynamicFunction::call_once` to make this a bit easier)
3. All arguments and return types must implement `Reflect`. While not a
big surprise coming from `bevy_reflect`, this implementation could
actually still work by swapping `Reflect` out with `Any`. Of course,
that makes working with the arguments and return values a bit harder.
4. Generic functions are not supported (unless they have been manually
monomorphized)
And general, reflection gotchas:
1. `&str` does not implement `Reflect`. Rather, `&'static str`
implements `Reflect` (the same is true for `&Path` and similar types).
This means that `&'static str` is considered an "owned" value for the
sake of generating arguments. Additionally, arguments and return types
containing `&str` will assume it's `&'static str`, which is almost never
the desired behavior. In these cases, the only solution (I believe) is
to use `&String` instead.
### Followup Work
This PR is the first of two PRs I intend to work on. The second PR will
aim to integrate this new function reflection system into the existing
reflection traits and `TypeInfo`. The goal would be to register and call
a reflected type's methods dynamically.
I chose not to do that in this PR since the diff is already quite large.
I also want the discussion for both PRs to be focused on their own
implementation.
Another followup I'd like to do is investigate allowing common container
types as a return type, such as `Option<&[mut] T>` and `Result<&[mut] T,
E>`. This would allow even more functions to opt into this system. I
chose to not include it in this one, though, for the same reasoning as
previously mentioned.
### Alternatives
One alternative I had considered was adding a macro to convert any
function into a reflection-based counterpart. The idea would be that a
struct that wraps the function would be created and users could specify
which arguments and return values should be `Reflect`. It could then be
called via a new `Function` trait.
I think that could still work, but it will be a fair bit more involved,
requiring some slightly more complex parsing. And it of course is a bit
more work for the user, since they need to create the type via macro
invocation.
It also makes registering these functions onto a type a bit more
complicated (depending on how it's implemented).
For now, I think this is a fairly simple, yet powerful solution that
provides the least amount of friction for users.
---
## Showcase
Bevy now adds support for storing and calling functions dynamically
using reflection!
```rust
// 1. Take a standard Rust function
fn add(left: i32, right: i32) -> i32 {
left + right
}
// 2. Convert it into a type-erased `DynamicFunction` using the `IntoFunction` trait
let mut function: DynamicFunction = add.into_function();
// 3. Define your arguments from reflected values
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
// 4. Call the function with your arguments
let result: Return = function.call(args).unwrap();
// 5. Extract the return value
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
## Changelog
#### TL;DR
- Added support for function reflection
- Added a new `Function Reflection` example:
ba727898f2/examples/reflection/function_reflection.rs (L1-L157)
#### Details
Added the following items:
- `ArgError` enum
- `ArgId` enum
- `ArgInfo` struct
- `ArgList` struct
- `Arg` enum
- `DynamicFunction` struct
- `FromArg` trait (derived with `derive(Reflect)`)
- `FunctionError` enum
- `FunctionInfo` struct
- `FunctionResult` alias
- `GetOwnership` trait (derived with `derive(Reflect)`)
- `IntoFunction` trait (with blanket implementation)
- `IntoReturn` trait (derived with `derive(Reflect)`)
- `Ownership` enum
- `ReturnInfo` struct
- `Return` enum
---------
Co-authored-by: Periwink <charlesbour@gmail.com>
As reported in #14004, many third-party plugins, such as Hanabi, enqueue
entities that don't have meshes into render phases. However, the
introduction of indirect mode added a dependency on mesh-specific data,
breaking this workflow. This is because GPU preprocessing requires that
the render phases manage indirect draw parameters, which don't apply to
objects that aren't meshes. The existing code skips over binned entities
that don't have indirect draw parameters, which causes the rendering to
be skipped for such objects.
To support this workflow, this commit adds a new field,
`non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple
list of (bin key, entity) pairs. After drawing batchable and unbatchable
objects, the non-mesh items are drawn one after another. Bevy itself
doesn't enqueue any items into this list; it exists solely for the
application and/or plugins to use.
Additionally, this commit switches the asset ID in the standard bin keys
to be an untyped asset ID rather than that of a mesh. This allows more
flexibility, allowing bins to be keyed off any type of asset.
This patch adds a new example, `custom_phase_item`, which simultaneously
serves to demonstrate how to use this new feature and to act as a
regression test so this doesn't break again.
Fixes #14004.
## Changelog
### Added
* `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins
to add custom items to.
# Objective
- Fixes#13728
## Solution
- add a new feature `smaa_luts`. If enabled, it also enables `ktx2` and
`zstd`; if not, it doesn't load the files but uses placeholders instead
- adds all the resources needed in the same places that the systems that
use them are added.
# Objective
A very common way to organize a first-person view is to split it into
two kinds of models:
- The *view model* is the model that represents the player's body.
- The *world model* is everything else.
The reason for this distinction is that these two models should be
rendered with different FOVs.
The view model is typically designed and animated with a very specific
FOV in mind, so it is
generally *fixed* and cannot be changed by a player. The world model, on
the other hand, should
be able to change its FOV to accommodate the player's preferences for
the following reasons:
- *Accessibility*: How prone is the player to motion sickness? A wider
FOV can help.
- *Tactical preference*: Does the player want to see more of the
battlefield?
Or have a more zoomed-in view for precision aiming?
- *Physical considerations*: How well does the in-game FOV match the
player's real-world FOV?
Are they sitting in front of a monitor or playing on a TV in the living
room? How big is the screen?
## Solution
I've added an example implementing the described setup as follows.
The `Player` is an entity holding two cameras, one for each model. The
view model camera has a fixed
FOV of 70 degrees, while the world model camera has a variable FOV that
can be changed by the player.
I use different `RenderLayers` to select what to render.
- The world model camera has no explicit `RenderLayers` component, so it
uses layer 0.
All static objects in the scene are also on layer 0 for the same reason.
- The view model camera has a `RenderLayers` component with layer 1, so
it only renders objects
explicitly assigned to layer 1. The arm of the player is one such
object.
The order of the view model camera is additionally bumped to 1 to ensure
it renders on top of the world model.
- The light source in the scene must illuminate both the view model and
the world model, so it is
assigned to both layers 0 and 1.
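A rough sketch of that layer setup (not the example's exact code; it assumes Bevy 0.14-era camera and `RenderLayers` APIs):
```rust
use bevy::prelude::*;
use bevy::render::camera::ClearColorConfig;
use bevy::render::view::RenderLayers;

const VIEW_MODEL_LAYER: usize = 1;

fn spawn_cameras_and_light(mut commands: Commands) {
    // World model camera: default layer 0, player-adjustable FOV.
    commands.spawn(Camera3dBundle {
        projection: PerspectiveProjection {
            fov: 90.0_f32.to_radians(),
            ..default()
        }
        .into(),
        ..default()
    });

    // View model camera: fixed 70 degree FOV, renders only layer 1,
    // and is drawn on top of the world model without clearing it.
    commands.spawn((
        Camera3dBundle {
            camera: Camera {
                order: 1,
                clear_color: ClearColorConfig::None,
                ..default()
            },
            projection: PerspectiveProjection {
                fov: 70.0_f32.to_radians(),
                ..default()
            }
            .into(),
            ..default()
        },
        RenderLayers::layer(VIEW_MODEL_LAYER),
    ));

    // The light must illuminate both models, so it lives on both layers.
    commands.spawn((
        DirectionalLightBundle::default(),
        RenderLayers::from_layers(&[0, VIEW_MODEL_LAYER]),
    ));
}
```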
To better see the effect, the player can move the camera by dragging
their mouse and change the world model's FOV with the arrow keys. The
arrow up key maps to "decrease FOV" and the arrow down key maps to
"increase FOV". This sounds backwards on paper, but is more intuitive
when actually changing the FOV in-game since a decrease in FOV looks
like a zoom-in.
I intentionally do not allow changing the view model's FOV even though
it would be illustrative because that would be an anti-pattern and bloat
the code a bit.
The example is called `first_person_view_model` and not just
`first_person` because I want to highlight that this is not a simple
flycam, but actually renders the player.
## Testing
Default FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/8c2e804f-fac2-48c7-8a22-d85af999dfb2">
Decreased FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/1733b3e5-f583-4214-a454-3554e3cbd066">
Increased FOV:
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/9047632/0b0640e6-5743-46f6-a79a-7181ba9678e8">
Note that the white bar on the right represents the player's arm, which
is more obvious in-game because you can move the camera around.
The box on top is there to make sure that the view model is receiving
shadows.
I tested only on macOS.
---
## Changelog
I don't think new examples go in here, do they?
## Caveat
The solution used here was implemented with help by @robtfm on
[Discord](https://discord.com/channels/691052431525675048/866787577687310356/1241019224491561000):
> shadow maps are specific to lights, not to layers
> if you want shadows from some meshes that are not visible, you could
have light on layer 1+2, meshes on layer 2, camera on layer 1 (for
example)
> but this might change in future, it's not exactly an intended feature
In other words, the example code as-is is not guaranteed to work in the
future. I want to bring this up because the use-case presented here is
extremely common in first-person games and important for accessibility.
It would be good to have a blessed and easy way of how to achieve it.
I'm also not happy about how I get the `perspective` variable in
`change_fov`. Very open to suggestions :)
## Related issues
- Addresses parts of #12658
- Addresses parts of #12588
---------
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
This is the first of a series of PRs intended to begin the upstreaming
process for `bevy_mod_picking`. The purpose of this PR is to:
+ Create the new `bevy_picking` crate
+ Upstream `CorePlugin` as `PickingPlugin`
+ Upstream the core pointer and backend abstractions.
This code has been ported verbatim from the corresponding files in
[bevy_picking_core](https://github.com/aevyrie/bevy_mod_picking/tree/main/crates/bevy_picking_core/src)
with a few tiny naming and docs tweaks.
The work here is only an initial foothold to get the up-streaming
process started in earnest. We can do refactoring and improvements once
this is in-tree.
---------
Co-authored-by: Aevyrie <aevyrie@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Provide an expressive way to register dynamic behavior in response to
ECS changes that is consistent with existing bevy types and traits as to
provide a smooth user experience.
- Provide a mechanism for immediate changes in response to events during
command application in order to facilitate improved query caching on the
path to relations.
## Solution
- A new fundamental ECS construct, the `Observer`; inspired by flec's
observers but adapted to better fit bevy's access patterns and rust's
type system.
---
## Examples
There are 3 main ways to register observers. The first is a "component
observer" that looks like this:
```rust
world.observe(|trigger: Trigger<OnAdd, Transform>, query: Query<&Transform>| {
    let transform = query.get(trigger.entity()).unwrap();
});
```
The above code will spawn a new entity representing the observer that
will run its callback whenever the `Transform` component is added to an
entity. This is a system-like function that supports dependency
injection for all the standard bevy types: `Query`, `Res`, `Commands`
etc. It also has a `Trigger` parameter that provides information about
the trigger such as the target entity, and the event being triggered.
Importantly these systems run during command application which is key
for their future use to keep ECS internals up to date. There are similar
events for `OnInsert` and `OnRemove`, and this will be expanded with
things such as `ArchetypeCreated`, `TableEmpty` etc. in follow up PRs.
Another way to register an observer is an "entity observer" that looks
like this:
```rust
world.entity_mut(entity).observe(|trigger: Trigger<Resize>| {
// ...
});
```
Entity observers run whenever an event of their type is triggered
targeting that specific entity. This type of observer will de-spawn
itself if the entity (or entities) it is observing is ever de-spawned so
as to not leave dangling observers.
Entity observers can also be spawned from deferred contexts such as
other observers, systems, or hooks using commands:
```rust
commands.entity(entity).observe(|trigger: Trigger<Resize>| {
// ...
});
```
Observers are not limited to in built event types, they can be used with
any type that implements `Event` (which has been extended to implement
Component). This means events can also carry data:
```rust
#[derive(Event)]
struct Resize { x: u32, y: u32 }
commands.entity(entity).observe(|trigger: Trigger<Resize>, query: Query<&mut Size>| {
let event = trigger.event();
// ...
});
// Will trigger the observer when commands are applied.
commands.trigger_targets(Resize { x: 10, y: 10 }, entity);
```
You can also trigger events that target more than one entity at a time:
```rust
commands.trigger_targets(Resize { x: 10, y: 10 }, [e1, e2]);
```
Additionally, Observers don't _need_ entity targets:
```rust
app.observe(|trigger: Trigger<Quit>| {
    // ...
});
commands.trigger(Quit);
```
In these cases, `trigger.entity()` will be a placeholder.
Observers are actually just normal entities with an `ObserverState` and
`Observer` component! The `observe()` functions above are just shorthand
for:
```rust
world.spawn(Observer::new(|trigger: Trigger<Resize>| {}));
```
This will spawn the `Observer` system and use an `on_add` hook to add
the `ObserverState` component.
Dynamic components and trigger types are also fully supported allowing
for runtime defined trigger types.
## Possible Follow-ups
1. Deprecate `RemovedComponents`; observers should fulfill all use cases
while being more flexible and performant.
2. Queries as entities: Swap queries to entities and begin using
observers listening to archetype creation triggers to keep their caches
in sync, this allows unification of `ObserverState` and `QueryState` as
well as unlocking several API improvements for `Query` and the
management of `QueryState`.
3. Trigger bubbling: For some UI use cases in particular users are
likely to want some form of bubbling for entity observers, this is
trivial to implement naively but ideally this includes an acceleration
structure to cache hierarchy traversals.
4. All kinds of other in-built trigger types.
5. Optimization; in order to not bloat the complexity of the PR I have
kept the implementation straightforward, there are several areas where
performance can be improved. The focus for this PR is to get the
behavior implemented and not incur a performance cost for users who
don't use observers.
I am leaving each of these to follow up PR's in order to keep each of
them reviewable as this already includes significant changes.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: MiniaczQ <xnetroidpl@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Add a new example showcasing how to add custom primitives and what you
can do with them.
## Solution
- Added a new example `custom_primitives` with a 2D heart shape
primitive highlighting
- `Bounded2d` by implementing and visualising bounding shapes,
- `Measured2d` by implementing it,
- `Meshable` to show the shape on the screen
- The example also includes an `Extrusion<Heart>` implementing
- `Measured3d`,
- `Bounded3d` using the `BoundedExtrusion` trait and
- meshing using the `Extrudable` trait.
## Additional information
Here are two images of the heart and its extrusion:
![image_2024-06-10_194631194](https://github.com/bevyengine/bevy/assets/62256001/53f1836c-df74-4ba6-85e9-fabdafa94c66)
![Screenshot 2024-06-10
194609](https://github.com/bevyengine/bevy/assets/62256001/b1630e71-6e94-4293-b7b5-da8d9cc98faf)
---------
Co-authored-by: Jakub Marcowski <37378746+Chubercik@users.noreply.github.com>
# Objective
Partially address #13408
Rework of #13613
Unify the very nice forms of interpolation specifically present in
`bevy_math` under a shared trait upon which further behavior can be
based.
The ideas in this PR were prompted by [Lerp smoothing is broken by Freya
Holmer](https://www.youtube.com/watch?v=LSNQuFEDOyQ).
## Solution
There is a new trait `StableInterpolate` in `bevy_math::common_traits`
which enshrines a quite-specific notion of interpolation with a lot of
guarantees:
```rust
/// A type with a natural interpolation that provides strong subdivision guarantees.
///
/// Although the only required method is `interpolate_stable`, many things are expected of it:
///
/// 1. The notion of interpolation should follow naturally from the semantics of the type, so
/// that inferring the interpolation mode from the type alone is sensible.
///
/// 2. The interpolation recovers something equivalent to the starting value at `t = 0.0`
/// and likewise with the ending value at `t = 1.0`.
///
/// 3. Importantly, the interpolation must be *subdivision-stable*: for any interpolation curve
/// between two (unnamed) values and any parameter-value pairs `(t0, p)` and `(t1, q)`, the
/// interpolation curve between `p` and `q` must be the *linear* reparametrization of the original
/// interpolation curve restricted to the interval `[t0, t1]`.
///
/// The last of these conditions is very strong and indicates something like constant speed. It
/// is called "subdivision stability" because it guarantees that breaking up the interpolation
/// into segments and joining them back together has no effect.
///
/// Here is a diagram depicting it:
/// ```text
/// top curve = u.interpolate_stable(v, t)
///
/// t0 => p t1 => q
/// |-------------|---------|-------------|
/// 0 => u / \ 1 => v
/// / \
/// / \
/// / linear \
/// / reparametrization \
/// / t = t0 * (1 - s) + t1 * s \
/// / \
/// |-------------------------------------|
/// 0 => p 1 => q
///
/// bottom curve = p.interpolate_stable(q, s)
/// ```
///
/// Note that some common forms of interpolation do not satisfy this criterion. For example,
/// [`Quat::lerp`] and [`Rot2::nlerp`] are not subdivision-stable.
///
/// Furthermore, this is not to be used as a general trait for abstract interpolation.
/// Consumers rely on the strong guarantees in order for behavior based on this trait to be
/// well-behaved.
///
/// [`Quat::lerp`]: crate::Quat::lerp
/// [`Rot2::nlerp`]: crate::Rot2::nlerp
pub trait StableInterpolate: Clone {
/// Interpolate between this value and the `other` given value using the parameter `t`.
/// Note that the parameter `t` is not necessarily clamped to lie between `0` and `1`.
/// When `t = 0.0`, `self` is recovered, while `other` is recovered at `t = 1.0`,
/// with intermediate values lying between the two.
fn interpolate_stable(&self, other: &Self, t: f32) -> Self;
}
```
This trait has a blanket implementation over `NormedVectorSpace`, where
`lerp` is used, along with implementations for `Rot2`, `Quat`, and the
direction types using variants of `slerp`. Other areas may choose to
implement this trait in order to hook into its functionality, but the
stringent requirements must actually be met.
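For instance, a small sketch of what this buys you at the call site (paths assume `bevy_math`'s re-exports):
```rust
use bevy::math::{Quat, StableInterpolate, Vec3};

// Vectors pick up the blanket `NormedVectorSpace` implementation (plain lerp)...
let midpoint = Vec3::ZERO.interpolate_stable(&Vec3::new(2.0, 0.0, 0.0), 0.5);
assert_eq!(midpoint, Vec3::new(1.0, 0.0, 0.0));

// ...while rotations use a slerp-style implementation, so the interpolation
// mode is inferred from the type rather than chosen ad hoc at each call site.
let _halfway = Quat::IDENTITY.interpolate_stable(&Quat::from_rotation_z(1.0), 0.5);
```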
This trait bears no direct relationship with `bevy_animation`'s
`Animatable` trait, although they may choose to use `interpolate_stable`
in their trait implementations if they wish, as both traits involve
type-inferred interpolations of the same kind. `StableInterpolate` is
not a supertrait of `Animatable` for a couple reasons:
1. Notions of interpolation in animation are generally going to be much
more general than those allowed under these constraints.
2. Laying out these generalized interpolation notions is the domain of
`bevy_animation` rather than of `bevy_math`. (Consider also that
inferring interpolation from types is not universally desirable.)
Similarly, this is not implemented on `bevy_color`'s color types,
although their current mixing behavior does meet the conditions of the
trait.
As an aside, the subdivision-stability condition is of interest
specifically for the [Curve
RFC](https://github.com/bevyengine/rfcs/pull/80), where it also ensures
a kind of stability for subsampling.
Importantly, this trait ensures that the "smooth following" behavior
defined in this PR behaves predictably:
```rust
/// Smoothly nudge this value towards the `target` at a given decay rate. The `decay_rate`
/// parameter controls how fast the distance between `self` and `target` decays relative to
/// the units of `delta`; the intended usage is for `decay_rate` to generally remain fixed,
/// while `delta` is something like `delta_time` from an updating system. This produces a
/// smooth following of the target that is independent of framerate.
///
/// More specifically, when this is called repeatedly, the result is that the distance between
/// `self` and a fixed `target` attenuates exponentially, with the rate of this exponential
/// decay given by `decay_rate`.
///
/// For example, at `decay_rate = 0.0`, this has no effect.
/// At `decay_rate = f32::INFINITY`, `self` immediately snaps to `target`.
/// In general, higher rates mean that `self` moves more quickly towards `target`.
///
/// # Example
/// ```
/// # use bevy_math::{Vec3, StableInterpolate};
/// # let delta_time: f32 = 1.0 / 60.0;
/// let mut object_position: Vec3 = Vec3::ZERO;
/// let target_position: Vec3 = Vec3::new(2.0, 3.0, 5.0);
/// // Decay rate of ln(10) => after 1 second, remaining distance is 1/10th
/// let decay_rate = f32::ln(10.0);
/// // Calling this repeatedly will move `object_position` towards `target_position`:
/// object_position.smooth_nudge(&target_position, decay_rate, delta_time);
/// ```
fn smooth_nudge(&mut self, target: &Self, decay_rate: f32, delta: f32) {
self.interpolate_stable_assign(target, 1.0 - f32::exp(-decay_rate * delta));
}
```
As the documentation indicates, the intention is for this to be called
in game update systems, and `delta` would be something like
`Time::delta_seconds` in Bevy, allowing positions, orientations, and so
on to smoothly follow a target. A new example, `smooth_follow`,
demonstrates a basic implementation of this, with a sphere smoothly
following a sharply moving target:
https://github.com/bevyengine/bevy/assets/2975848/7124b28b-6361-47e3-acf7-d1578ebd0347
## Testing
Tested by running the example with various parameters.
# Objective
This PR addresses the 2D part of #12658. I plan to separate the examples
and make one PR per camera example.
## Solution
Added a new top-down example composed of:
- [x] Player keyboard movements
- [x] UI for keyboard instructions
- [x] Colors and bloom effect to see the movement of the player
- [x] Camera smooth movement towards the player (lerp)
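A minimal sketch of the camera-follow step (component and system names are hypothetical; the smoothing factor is arbitrary):
```rust
use bevy::prelude::*;

#[derive(Component)]
struct Player;

fn follow_player(
    time: Res<Time>,
    player: Query<&Transform, With<Player>>,
    mut camera: Query<&mut Transform, (With<Camera2d>, Without<Player>)>,
) {
    let Ok(player_transform) = player.get_single() else {
        return;
    };
    let Ok(mut camera_transform) = camera.get_single_mut() else {
        return;
    };
    // Close a fraction of the remaining distance each frame for a smooth chase.
    let target = player_transform.translation;
    camera_transform.translation = camera_transform
        .translation
        .lerp(target, (5.0 * time.delta_seconds()).min(1.0));
}
```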
## Testing
```bash
cargo run --features="wayland,bevy/dynamic_linking" --example 2d_top_down_camera
```
https://github.com/bevyengine/bevy/assets/10638479/95db0587-e5e0-4f55-be11-97444b795793
# Objective
This PR addresses one of the issues from [discord state
discussion](https://discord.com/channels/691052431525675048/1237949214017716356).
Same-state transitions can be desirable, so there should exist a hook
for them.
Fixes https://github.com/bevyengine/bevy/issues/9130.
## Solution
- Allow `StateTransitionEvent<S>` to contain identity transitions.
- Ignore identity transitions at schedule running level (`OnExit`,
`OnTransition`, `OnEnter`).
- Propagate identity transitions through `SubStates` and
`ComputedStates`.
- Add example about registering custom transition schedules.
## Changelog
- `StateTransitionEvent<S>` can be emitted with same `exited` and
`entered` state.
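For illustration, a small sketch of reacting to these identity transitions (assuming the `exited`/`entered` fields mentioned above):
```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
enum GameState {
    #[default]
    Menu,
    InGame,
}

fn log_identity_transitions(mut events: EventReader<StateTransitionEvent<GameState>>) {
    for event in events.read() {
        // With this change, `exited` and `entered` may now hold the same state.
        if event.exited.is_some() && event.exited == event.entered {
            info!("identity transition within {:?}", event.entered);
        }
    }
}
```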
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Move `StateScoped` and `log_transitions` to `bevy_state`, since they're
useful for end users.
Addresses #12852, although not in the way the issue had in mind.
## Solution
- Added `bevy_hierarchy` to default features of `bevy_state`.
- Move `log_transitions` to `transitions` module.
- Move `StateScoped` to `state_scoped` module, gated behind
`bevy_hierarchy` feature.
- Refreshed implementation.
- Added `enable_state_scoped_entities<S: States>()` to add required
machinery to `App` for clearing state-scoped entities.
## Changelog
- Added `log_transitions` for displaying state transitions.
- Added `StateScoped` for binding entity lifetime to state and the app
method `enable_state_scoped_entities` to register cleaning behavior.
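For illustration, a rough sketch of the intended usage (names follow the changelog; treat it as a sketch rather than the exact API surface):
```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
enum GameState {
    #[default]
    Menu,
    InGame,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_state::<GameState>()
        // Register the cleanup machinery for `StateScoped<GameState>` entities.
        .enable_state_scoped_entities::<GameState>()
        .add_systems(OnEnter(GameState::InGame), spawn_level)
        .run();
}

fn spawn_level(mut commands: Commands) {
    // Despawned automatically when `GameState::InGame` is exited.
    commands.spawn((StateScoped(GameState::InGame), SpatialBundle::default()));
}
```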
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.
[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.
Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.
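For illustration, a sketch of setting the two new parameters on a material (the values are arbitrary):
```rust
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    let brushed_metal = materials.add(StandardMaterial {
        metallic: 1.0,
        perceptual_roughness: 0.4,
        // How much the roughness differs between the tangent and bitangent (0..=1).
        anisotropy_strength: 0.8,
        // Rotates the anisotropy direction away from the mesh tangent.
        anisotropy_rotation: 0.5,
        ..default()
    });
    commands.spawn(PbrBundle {
        mesh: meshes.add(Sphere::new(1.0)),
        material: brushed_metal,
        ..default()
    });
}
```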
In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.
A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.
[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel
[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering
[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md
[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/
## Changelog
### Added
* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.
## Screenshots
With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)
Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
# Objective
- fixes #4823
## Solution
As outlined in the discussion in the linked issue as the best current
solution, this PR adds specific GltfExtras for
- scenes
- meshes
- materials
- As it is, this is not a breaking change. I hesitated to rename the
current "GltfExtras" component to "PrimitiveGltfExtras", but that would
result in a breaking change and might be a bit confusing as to what
"primitive" refers to.
## Testing
- I included a bare-bones example & asset (exported gltf file from
Blender) with gltf extras at all the relevant levels : scene, mesh,
material
---
## Changelog
- adds "SceneGltfExtras" injected at the scene level if any
- adds "MeshGltfExtras", injected at the mesh level if any
- adds "MaterialGltfExtras", injected at the mesh level if any: ie if a
mesh has a material that has gltf extras, the component will be injected
there.
# Objective
- Upgrade winit to v0.30
- Fixes https://github.com/bevyengine/bevy/issues/13331
## Solution
This is a rewrite/adaptation of the new trait system described and
implemented in `winit` v0.30.
## Migration Guide
The custom UserEvent is now renamed as WakeUp, used to wake up the loop
if anything happens outside the app (a new
[custom_user_event](https://github.com/bevyengine/bevy/pull/13366/files#diff-2de8c0a8d3028d0059a3d80ae31b2bbc1cde2595ce2d317ea378fe3e0cf6ef2d)
example shows this behavior).
The internal `UpdateState` has been removed and replaced internally by
the AppLifecycle. When changed, the AppLifecycle is sent as an event.
The `UpdateMode` now accepts only two values: `Continuous` and
`Reactive`, but the latter exposes 3 new properties to enable reacting
to device, user, or window events. The previous `UpdateMode::Reactive` is
now equivalent to `UpdateMode::reactive()`, while
`UpdateMode::ReactiveLowPower` to `UpdateMode::reactive_low_power()`.
The `ApplicationLifecycle` has been renamed as `AppLifecycle`, and now
contains the possible values of the application state inside the event
loop:
* `Idle`: the loop has not started yet
* `Running` (previously called `Started`): the loop is running
* `WillSuspend`: the loop is going to be suspended
* `Suspended`: the loop is suspended
* `WillResume`: the loop is going to be resumed
Note: the `Resumed` state has been removed since the resumed app is just
running.
Finally, now that `winit` enables this, the `WinitPlugin` has been
extended to support custom events.
## Test platforms
- [x] Windows
- [x] MacOs
- [x] Linux (x11)
- [x] Linux (Wayland)
- [x] Android
- [x] iOS
- [x] WASM/WebGPU
- [x] WASM/WebGL2
## Outstanding issues / regressions
- [ ] iOS: build failed in CI
- blocking, but may just be flakiness
- [x] Cross-platform: when the window is maximised, changes in the scale
factor don't apply, to make them apply one has to make the window
smaller again. (Re-maximising keeps the updated scale factor)
- non-blocking, but good to fix
- [ ] Android: it's pretty easy to quickly open and close the app and
then the music keeps playing when suspended.
- non-blocking but worrying
- [ ] Web: the application will hang when switching tabs
- Not new, duplicate of https://github.com/bevyengine/bevy/issues/13486
- [ ] Cross-platform?: Screenshot failure, `ERROR present_frames:
wgpu_core::present: No work has been submitted for this frame before`
taking the first screenshot, but after pressing space
- non-blocking, but good to fix
---------
Co-authored-by: François <francois.mockers@vleue.com>
# Objective
- Show + Visually Test that 3D primitive sampling works
- Make an example that looks nice.
## Solution
- Added a `sampling_primitives` examples which shows all the 3D
primitives being sampled, with a firefly aesthetic.
![image](https://github.com/bevyengine/bevy/assets/27301845/f882438b-2c72-48b1-a6e9-162a80c4273e)
## Testing
- `cargo run --example sampling_primitives`
- Haven't tested WASM.
## Changelog
### Added
- Added a new example, `sampling_primitives`, to showcase all the 3D
sampleable primitives.
## Additional notes:
This example borrowed a bunch of code from the other sampling example,
by @mweatherley.
In future updates this example should be updated with new 3D primitives
as they become sampleable.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
We introduced a bunch of neat random sampling stuff in this release; we
should do a good job of showing people how to use it, and writing
examples is part of this.
## Solution
A new Math example, `random_sampling`, shows off the `ShapeSample` API
functionality. For the moment, it renders a cube and allows the user to
sample points from its interior or boundary in sets of either 1 or 100:
<img width="1440" alt="Screenshot 2024-05-25 at 1 16 08 PM"
src="https://github.com/bevyengine/bevy/assets/2975848/9cb6f53f-c89a-42c2-8907-b11d294c402a">
On the level of code, these are reflected by two ways of using
`ShapeSample`:
```rust
// Get a single random Vec3:
let sample: Vec3 = match *mode {
Mode::Interior => shape.0.sample_interior(rng),
Mode::Boundary => shape.0.sample_boundary(rng),
};
```
```rust
// Get 100 random Vec3s:
let samples: Vec<Vec3> = match *mode {
Mode::Interior => {
let dist = shape.0.interior_dist();
dist.sample_iter(&mut rng).take(100).collect()
}
Mode::Boundary => {
let dist = shape.0.boundary_dist();
dist.sample_iter(&mut rng).take(100).collect()
}
};
```
## Testing
Run the example!
## Discussion
Maybe in the future it would be nice to show off all of the different
shapes that we have implemented `ShapeSample` for, but I wanted to start
just by demonstrating the functionality. Here, I chose a cube because
it's simple and because it looks good rendered transparently with
backface culling disabled.
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).
For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).
To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
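For illustration, a minimal sketch of enabling SSR on a camera per the requirements above (module paths are assumptions):
```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle;
use bevy::prelude::*;

fn spawn_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // The settings component inside the bundle comes with sensible defaults
        // and can be tweaked per camera.
        ScreenSpaceReflectionsBundle::default(),
        // SSR runs on the deferred path and raymarches through the depth buffer,
        // so both prepasses must be present.
        DepthPrepass,
        DeferredPrepass,
    ));
}
```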
A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).
## Changelog
### Added
* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.
![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)
![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
# Objective
Adopted #11748
## Solution
I've rebased on main to fix the merge conflicts. ~~Not quite ready to
merge yet~~
* Clippy is happy and the tests are passing, but...
* ~~The new shapes in `examples/2d/2d_shapes.rs` don't look right at
all~~ Never mind, looks like radians and degrees just got mixed up at
some point?
* I have updated one doc comment based on a review in the original PR.
---------
Co-authored-by: Alexis "spectria" Horizon <spectria.limina@gmail.com>
Co-authored-by: Alexis "spectria" Horizon <118812919+spectria-limina@users.noreply.github.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Ben Harper <ben@tukom.org>
# Objective
Supersedes #12881. Added a simple implementation that allows the user
to react to multiple asset loads both synchronously and asynchronously.
## Solution
Added `load_acquire`, that holds an item and drops it when loading is
finished or failed.
When used synchronously: hold an `Arc<()>` and check for
`Arc::strong_count() == 1` when all loading has completed.
When used asynchronously: hold a `SemaphoreGuard` and await
`acquire_all` for completion.
This implementation has more freedom than the original in my opinion.
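For illustration, a hypothetical sketch of the synchronous pattern (asset paths and system names are made up):
```rust
use bevy::prelude::*;
use std::sync::Arc;

#[derive(Resource)]
struct LoadingGuard(Arc<()>);

fn start_loading(mut commands: Commands, asset_server: Res<AssetServer>) {
    let guard = Arc::new(());
    // Each load holds a clone of the guard and drops it when loading finishes or fails.
    let _scene: Handle<Scene> =
        asset_server.load_acquire("levels/level1.glb#Scene0", guard.clone());
    let _music: Handle<AudioSource> =
        asset_server.load_acquire("music/theme.ogg", guard.clone());
    commands.insert_resource(LoadingGuard(guard));
}

fn check_loading(guard: Res<LoadingGuard>) {
    // Only our own reference remains once every tracked load has completed or failed.
    if Arc::strong_count(&guard.0) == 1 {
        info!("all tracked assets finished loading");
    }
}
```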
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
As work on the editor starts to ramp up, it might be nice to start
allowing types to specify custom attributes. These can be used to
provide certain functionality to fields, such as ranges or controlling
how data is displayed.
A good example of this can be seen in
[`bevy-inspector-egui`](https://github.com/jakobhellermann/bevy-inspector-egui)
with its
[`InspectorOptions`](https://docs.rs/bevy-inspector-egui/0.22.1/bevy_inspector_egui/struct.InspectorOptions.html):
```rust
#[derive(Reflect, Default, InspectorOptions)]
#[reflect(InspectorOptions)]
struct Slider {
#[inspector(min = 0.0, max = 1.0)]
value: f32,
}
```
Normally, as demonstrated in the example above, these attributes are
handled by a derive macro and stored in a corresponding `TypeData`
struct (i.e. `ReflectInspectorOptions`).
Ideally, we would have a good way of defining this directly via
reflection so that users don't need to create and manage a whole proc
macro just to allow these sorts of attributes.
And note that this doesn't have to just be for inspectors and editors.
It can be used for things done purely on the code side of things.
## Solution
Create a new method for storing attributes on fields via the `Reflect`
derive.
These custom attributes are stored in type info (e.g. `NamedField`,
`StructInfo`, etc.).
```rust
#[derive(Reflect)]
struct Slider {
#[reflect(@0.0..=1.0)]
value: f64,
}
let TypeInfo::Struct(info) = Slider::type_info() else {
panic!("expected struct info");
};
let field = info.field("value").unwrap();
let range = field.get_attribute::<RangeInclusive<f64>>().unwrap();
assert_eq!(*range, 0.0..=1.0);
```
## TODO
- [x] ~~Bikeshed syntax~~ Went with a type-based approach, prefixed by
`@` for ease of parsing and flexibility
- [x] Add support for custom struct/tuple struct field attributes
- [x] Add support for custom enum variant field attributes
- [x] ~~Add support for custom enum variant attributes (maybe?)~~ ~~Will
require a larger refactor. Can be saved for a future PR if we really
want it.~~ Actually, we apparently still have support for variant
attributes despite not using them, so it was pretty easy to add lol.
- [x] Add support for custom container attributes
- [x] Allow custom attributes to store any reflectable value (not just
`Lit`)
- [x] ~~Store attributes in registry~~ This PR used to store these in
attributes in the registry, however, it has since switched over to
storing them in type info
- [x] Add example
## Bikeshedding
> [!note]
> This section was made for the old method of handling custom
attributes, which stored them by name (i.e. `some_attribute = 123`). The
PR has shifted away from that, to a more type-safe approach.
>
> This section has been left for reference.
There are a number of ways we can syntactically handle custom
attributes. Feel free to leave a comment on your preferred one! Ideally
we want one that is clear, readable, and concise since these will
potentially see _a lot_ of use.
Below is a small, non-exhaustive list of them. Note that the
`skip_serializing` reflection attribute is added to demonstrate how each
case plays with existing reflection attributes.
<details>
<summary>List</summary>
##### 1. `@(name = value)`
> The `@` was chosen to make them stand out from other attributes and
because the "at" symbol is a subtle pneumonic for "attribute". Of
course, other symbols could be used (e.g. `$`, `#`, etc.).
```rust
#[derive(Reflect)]
struct Slider {
#[reflect(@(min = 0.0, max = 1.0), skip_serializing)]
#[reflect(@(bevy_editor::hint = "Range: 0.0 to 1.0"))]
value: f32,
}
```
##### 2. `@name = value`
> This is my personal favorite.
```rust
#[derive(Reflect)]
struct Slider {
#[reflect(@min = 0.0, @max = 1.0, skip_serializing)]
#[reflect(@bevy_editor::hint = "Range: 0.0 to 1.0")]
value: f32,
}
```
##### 3. `custom_attr(name = value)`
> `custom_attr` can be anything. Other possibilities include `with` or
`tag`.
```rust
#[derive(Reflect)]
struct Slider {
#[reflect(custom_attr(min = 0.0, max = 1.0), skip_serializing)]
#[reflect(custom_attr(bevy_editor::hint = "Range: 0.0 to 1.0"))]
value: f32,
}
```
##### 4. `reflect_attr(name = value)`
```rust
#[derive(Reflect)]
struct Slider {
#[reflect(skip_serializing)]
#[reflect_attr(min = 0.0, max = 1.0)]
#[reflect_attr(bevy_editor::hint = "Range: 0.0 to 1.0")]
value: f32,
}
```
</details>
---
## Changelog
- Added support for custom attributes on reflected types (i.e.
`#[reflect(@Foo::new("bar")]`)
# Objective
- Fixes #12377
## Solution
Added simple `#[diagnostic::on_unimplemented(...)]` attributes to some
critical public traits providing a more approachable initial error
message. Where appropriate, a `note` is added indicating that a `derive`
macro is available.
## Examples
<details>
<summary>Examples hidden for brevity</summary>
Below is a collection of examples showing the new error messages
produced by this change. In general, messages will start with a more
Bevy-centric error message (e.g., _`MyComponent` is not a `Component`_),
and a note directing the user to an available derive macro where
appropriate.
### Missing `#[derive(Resource)]`
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
struct MyResource;
fn main() {
App::new()
.insert_resource(MyResource)
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `MyResource` is not a `Resource`
--> examples/app/empty.rs:7:26
|
7 | .insert_resource(MyResource)
| --------------- ^^^^^^^^^^ invalid `Resource`
| |
| required by a bound introduced by this call
|
= help: the trait `Resource` is not implemented for `MyResource`
= note: consider annotating `MyResource` with `#[derive(Resource)]`
= help: the following other types implement trait `Resource`:
AccessibilityRequested
ManageAccessibilityUpdates
bevy::bevy_a11y::Focus
DiagnosticsStore
FrameCount
bevy::prelude::State<S>
SystemInfo
bevy::prelude::Axis<T>
and 141 others
note: required by a bound in `bevy::prelude::App::insert_resource`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:419:31
|
419 | pub fn insert_resource<R: Resource>(&mut self, resource: R) -> &mut Self {
| ^^^^^^^^ required by this bound in `App::insert_resource`
```
</details>
### Putting A `QueryData` in a `QueryFilter` Slot
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
#[derive(Component)]
struct A;
#[derive(Component)]
struct B;
fn my_system(_query: Query<&A, &B>) {}
fn main() {
App::new()
.add_systems(Update, my_system)
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `&B` is not a valid `Query` filter
--> examples/app/empty.rs:9:22
|
9 | fn my_system(_query: Query<&A, &B>) {}
| ^^^^^^^^^^^^^ invalid `Query` filter
|
= help: the trait `QueryFilter` is not implemented for `&B`
= help: the following other types implement trait `QueryFilter`:
With<T>
Without<T>
bevy::prelude::Or<()>
bevy::prelude::Or<(F0,)>
bevy::prelude::Or<(F0, F1)>
bevy::prelude::Or<(F0, F1, F2)>
bevy::prelude::Or<(F0, F1, F2, F3)>
bevy::prelude::Or<(F0, F1, F2, F3, F4)>
and 28 others
note: required by a bound in `bevy::prelude::Query`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_ecs\src\system\query.rs:349:51
|
349 | pub struct Query<'world, 'state, D: QueryData, F: QueryFilter = ()> {
| ^^^^^^^^^^^ required by this bound in `Query`
```
</details>
### Missing `#[derive(Component)]`
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
struct A;
fn my_system(mut commands: Commands) {
commands.spawn(A);
}
fn main() {
App::new()
.add_systems(Startup, my_system)
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `A` is not a `Bundle`
--> examples/app/empty.rs:6:20
|
6 | commands.spawn(A);
| ----- ^ invalid `Bundle`
| |
| required by a bound introduced by this call
|
= help: the trait `bevy::prelude::Component` is not implemented for `A`, which is required by `A: Bundle`
= note: consider annotating `A` with `#[derive(Component)]` or `#[derive(Bundle)]`
= help: the following other types implement trait `Bundle`:
TransformBundle
SceneBundle
DynamicSceneBundle
AudioSourceBundle<Source>
SpriteBundle
SpriteSheetBundle
Text2dBundle
MaterialMesh2dBundle<M>
and 34 others
= note: required for `A` to implement `Bundle`
note: required by a bound in `bevy::prelude::Commands::<'w, 's>::spawn`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_ecs\src\system\commands\mod.rs:243:21
|
243 | pub fn spawn<T: Bundle>(&mut self, bundle: T) -> EntityCommands {
| ^^^^^^ required by this bound in `Commands::<'w, 's>::spawn`
```
</details>
### Missing `#[derive(Asset)]`
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
struct A;
fn main() {
App::new()
.init_asset::<A>()
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `A` is not an `Asset`
--> examples/app/empty.rs:7:23
|
7 | .init_asset::<A>()
| ---------- ^ invalid `Asset`
| |
| required by a bound introduced by this call
|
= help: the trait `Asset` is not implemented for `A`
= note: consider annotating `A` with `#[derive(Asset)]`
= help: the following other types implement trait `Asset`:
Font
AnimationGraph
DynamicScene
Scene
AudioSource
Pitch
bevy::bevy_gltf::Gltf
GltfNode
and 17 others
note: required by a bound in `init_asset`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_asset\src\lib.rs:307:22
|
307 | fn init_asset<A: Asset>(&mut self) -> &mut Self;
| ^^^^^ required by this bound in `AssetApp::init_asset`
```
</details>
### Mismatched Input and Output on System Piping
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
fn producer() -> u32 {
123
}
fn consumer(_: In<u16>) {}
fn main() {
App::new()
.add_systems(Update, producer.pipe(consumer))
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `fn(bevy::prelude::In<u16>) {consumer}` is not a valid system with input `u32` and output `_`
--> examples/app/empty.rs:11:44
|
11 | .add_systems(Update, producer.pipe(consumer))
| ---- ^^^^^^^^ invalid system
| |
| required by a bound introduced by this call
|
= help: the trait `bevy::prelude::IntoSystem<u32, _, _>` is not implemented for fn item `fn(bevy::prelude::In<u16>) {consumer}`
= note: expecting a system which consumes `u32` and produces `_`
note: required by a bound in `pipe`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_ecs\src\system\mod.rs:168:12
|
166 | fn pipe<B, Final, MarkerB>(self, system: B) -> PipeSystem<Self::System, B::System>
| ---- required by a bound in this associated function
167 | where
168 | B: IntoSystem<Out, Final, MarkerB>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `IntoSystem::pipe`
```
</details>
### Missing Reflection
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
#[derive(Component)]
struct MyComponent;
fn main() {
App::new()
.register_type::<MyComponent>()
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `MyComponent` does not provide type registration information
--> examples/app/empty.rs:8:26
|
8 | .register_type::<MyComponent>()
| ------------- ^^^^^^^^^^^ the trait `GetTypeRegistration` is not implemented for `MyComponent`
| |
| required by a bound introduced by this call
|
= note: consider annotating `MyComponent` with `#[derive(Reflect)]`
= help: the following other types implement trait `GetTypeRegistration`:
bool
char
isize
i8
i16
i32
i64
i128
and 443 others
note: required by a bound in `bevy::prelude::App::register_type`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:619:29
|
619 | pub fn register_type<T: bevy_reflect::GetTypeRegistration>(&mut self) -> &mut Self {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `App::register_type`
```
</details>
### Missing `#[derive(States)]` Implementation
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
#[derive(Debug, Clone, Copy, Default, Eq, PartialEq, Hash)]
enum AppState {
#[default]
Menu,
InGame {
paused: bool,
turbo: bool,
},
}
fn main() {
App::new()
.init_state::<AppState>()
.run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: the trait bound `AppState: FreelyMutableState` is not satisfied
--> examples/app/empty.rs:15:23
|
15 | .init_state::<AppState>()
| ---------- ^^^^^^^^ the trait `FreelyMutableState` is not implemented for `AppState`
| |
| required by a bound introduced by this call
|
= note: consider annotating `AppState` with `#[derive(States)]`
note: required by a bound in `bevy::prelude::App::init_state`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:282:26
|
282 | pub fn init_state<S: FreelyMutableState + FromWorld>(&mut self) -> &mut Self {
| ^^^^^^^^^^^^^^^^^^ required by this bound in `App::init_state`
```
</details>
### Adding a `System` with Unhandled Output
<details>
<summary>Example Code</summary>
```rust
use bevy::prelude::*;
fn producer() -> u32 {
    123
}

fn main() {
    App::new()
        .add_systems(Update, producer)
        .run();
}
```
</details>
<details>
<summary>Error Generated</summary>
```error
error[E0277]: `fn() -> u32 {producer}` does not describe a valid system configuration
--> examples/app/empty.rs:9:30
|
9 | .add_systems(Update, producer)
| ----------- ^^^^^^^^ invalid system configuration
| |
| required by a bound introduced by this call
|
= help: the trait `IntoSystem<(), (), _>` is not implemented for fn item `fn() -> u32 {producer}`, which is required by `fn() -> u32 {producer}: IntoSystemConfigs<_>`
= help: the following other types implement trait `IntoSystemConfigs<Marker>`:
<Box<(dyn bevy::prelude::System<In = (), Out = ()> + 'static)> as IntoSystemConfigs<()>>
<NodeConfigs<Box<(dyn bevy::prelude::System<In = (), Out = ()> + 'static)>> as IntoSystemConfigs<()>>
<(S0,) as IntoSystemConfigs<(SystemConfigTupleMarker, P0)>>
<(S0, S1) as IntoSystemConfigs<(SystemConfigTupleMarker, P0, P1)>>
<(S0, S1, S2) as IntoSystemConfigs<(SystemConfigTupleMarker, P0, P1, P2)>>
<(S0, S1, S2, S3) as IntoSystemConfigs<(SystemConfigTupleMarker, P0, P1, P2, P3)>>
<(S0, S1, S2, S3, S4) as IntoSystemConfigs<(SystemConfigTupleMarker, P0, P1, P2, P3, P4)>>
<(S0, S1, S2, S3, S4, S5) as IntoSystemConfigs<(SystemConfigTupleMarker, P0, P1, P2, P3, P4, P5)>>
and 14 others
= note: required for `fn() -> u32 {producer}` to implement `IntoSystemConfigs<_>`
note: required by a bound in `bevy::prelude::App::add_systems`
--> C:\Users\Zac\Documents\GitHub\bevy\crates\bevy_app\src\app.rs:342:23
|
339 | pub fn add_systems<M>(
| ----------- required by a bound in this associated function
...
342 | systems: impl IntoSystemConfigs<M>,
| ^^^^^^^^^^^^^^^^^^^^ required by this bound in `App::add_systems`
```
</details>
</details>
## Testing
CI passed locally.
## Migration Guide
Upgrade to version 1.78 (or higher) of Rust.
## Future Work
- Currently, hints are not supported in this diagnostic. Ideally,
suggestions like _"consider using ..."_ would be in a hint rather than a
note, but a note is the best option available for now.
- System chaining and other `all_tuples!(...)`-based traits still have poor
error messages because their error format differs slightly.
---------
Co-authored-by: Jamie Ridding <Themayu@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.
To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
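A minimal setup sketch, assuming the `bevy::pbr` re-exports and the default settings described here (paths and values are illustrative, not prescriptive):
```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Volumetric fog is configured per-camera.
    commands.spawn((
        Camera3dBundle::default(),
        VolumetricFogSettings::default(),
    ));

    // Only lights tagged with `VolumetricLight` participate; the effect
    // samples the light's shadow map, so shadows must be enabled.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```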
The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.
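For reference, the Henyey-Greenstein phase function mentioned above is simple enough to write out directly; this Rust version is only an illustrative sketch, since the actual implementation lives in the shader:
```rust
/// Henyey-Greenstein phase function.
///
/// `g` is the asymmetry parameter in (-1, 1): positive values favor forward
/// scattering, negative values favor back scattering. `cos_theta` is the
/// cosine of the angle between the light direction and the view direction.
fn henyey_greenstein(g: f32, cos_theta: f32) -> f32 {
    let denom = 1.0 + g * g - 2.0 * g * cos_theta;
    (1.0 - g * g) / (4.0 * std::f32::consts::PI * denom * denom.sqrt())
}
```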
Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:
1. Raymarching at a lower resolution.
2. A post-processing blur (especially useful when combined with (1)).
3. Supporting point lights and spot lights.
4. Supporting lights with no shadow maps.
5. Supporting irradiance volumes and reflection probes.
6. Voxel components that reuse the volumetric fog code to create voxel
shapes.
7. *Horizon: Zero Dawn*-style clouds.
These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.
A new example, `volumetric_fog`, has been added to demonstrate the
effect.
## Changelog
### Added
* A new component, `VolumetricFogSettings`, is available to enable a more
physically-accurate, but more resource-intensive, form of fog.
* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.
![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)
![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)
[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/
[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.
The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur in the first pass, followed by two more diagonal blurs in the
second pass. In both cases, only two passes are needed. Bokeh requires
the first pass to have a second render target and requires two subpasses
in the second pass, which decreases its performance relative to the
Gaussian blur.
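The separability argument is easiest to see in code. Below is a CPU-side sketch of a separable blur over a grayscale image, applying the same 1D kernel horizontally and then vertically; the real implementation runs on the GPU, but the cost argument (two O(k) passes per pixel instead of one O(k²) pass) is the same:
```rust
/// Blur a `width` x `height` grayscale image with a 1D `kernel`, first along
/// rows and then along columns. Edge pixels are clamped.
fn separable_blur(image: &[f32], width: usize, height: usize, kernel: &[f32]) -> Vec<f32> {
    let half = (kernel.len() / 2) as isize;
    let pass = |src: &[f32], horizontal: bool| -> Vec<f32> {
        let mut dst = vec![0.0; src.len()];
        for y in 0..height {
            for x in 0..width {
                let mut sum = 0.0;
                for (i, &weight) in kernel.iter().enumerate() {
                    let offset = i as isize - half;
                    let (sx, sy) = if horizontal {
                        ((x as isize + offset).clamp(0, width as isize - 1), y as isize)
                    } else {
                        (x as isize, (y as isize + offset).clamp(0, height as isize - 1))
                    };
                    sum += weight * src[sy as usize * width + sx as usize];
                }
                dst[y * width + x] = sum;
            }
        }
        dst
    };
    let horizontally_blurred = pass(image, true);
    pass(&horizontally_blurred, false)
}
```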
The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number], which we use to compute
the focal length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.
A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.
Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)
Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)
No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)
[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field
[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04
[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/
[f-number]: https://en.wikipedia.org/wiki/F-number
[Super 35]: https://en.wikipedia.org/wiki/Super_35
[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/
## Changelog
### Added
* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
# Objective
Extracts the state mechanisms into a new crate called "bevy_state".
This comes with a few goals:
- state wasn't really an inherent part of the ECS machinery, so keeping
it within bevy_ecs felt forced
- by mixing it in with bevy_ecs, the maintainability of our more robust
state system was significantly compromised
Moving state into a new crate makes it easier to encapsulate as its own
feature, and easier to read and understand since it's no longer a
single, massive file.
## Solution
move the state-related elements from bevy_ecs to a new crate
## Testing
- All the automated tests were migrated and pass; the pre-existing
examples were run without changes to validate the move.
---
## Migration Guide
Since bevy_state is now gated behind the `bevy_state` feature, projects
that use state but disable default features will need to add that
feature flag explicitly.
Since it is no longer part of bevy_ecs, projects that use bevy_ecs
directly will need to manually pull in `bevy_state`, trigger the
StateTransition schedule, and handle any of the elements that bevy_app
currently sets up.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
Fixes #12966
## Solution
- Rename the `multi-threaded` feature to `multi_threaded` to match snake case
## Migration Guide
The Bevy feature `multi-threaded` is now named `multi_threaded`; update any
references to use the new name.
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.
The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.
Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_feature_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
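A sketch of what enabling the constant clearcoat factors might look like on `StandardMaterial` (the `clearcoat` and `clearcoat_perceptual_roughness` field names are my reading of this PR's additions; values are illustrative):
```rust
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    // A lacquered-wood-style material: a rough base with a glossy coat on top.
    let material = materials.add(StandardMaterial {
        base_color: Color::srgb(0.4, 0.25, 0.1),
        perceptual_roughness: 0.7,
        clearcoat: 1.0,
        clearcoat_perceptual_roughness: 0.1,
        ..default()
    });
    commands.spawn(PbrBundle {
        mesh: meshes.add(Sphere::new(1.0)),
        material,
        ..default()
    });
}
```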
This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.
A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.
[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel
[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md
[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html
![Screenshot 2024-04-19
101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)
![Screenshot 2024-04-19
102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)
## Changelog
### Added
* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.
## Migration Guide
* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.
* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
# Objective
Dynamic types can be tricky to understand and work with in bevy_reflect.
There should be an example that shows what they are and how they're
used.
## Solution
Add a `Dynamic Types` reflection example.
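For readers new to the topic, here is a tiny sketch of the kind of API the example walks through, using `bevy_reflect`'s `DynamicStruct` (the `Player` type is hypothetical; the example itself goes into far more depth):
```rust
use bevy::reflect::{DynamicStruct, Reflect};

#[derive(Reflect)]
struct Player {
    health: u32,
    name: String,
}

fn main() {
    // A dynamic type: a bag of named fields assembled at runtime,
    // with no corresponding concrete Rust type.
    let mut patch = DynamicStruct::default();
    patch.insert("health", 75_u32);

    // Apply the dynamic value onto a concrete type via reflection.
    let mut player = Player { health: 100, name: "Bevy".into() };
    player.apply(&patch);
    assert_eq!(player.health, 75);
}
```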
I'm planning to go through the reflection examples, adding new ones and
updating old ones. And I think this walkthrough style tends to work
best. Due to the amount of text and associated explanation, it might fit
better in a dedicated reflection chapter of the WIP Bevy Book. However,
I think it might be valuable to have some public-facing tutorials for
these concepts.
Let me know if there are any thoughts or critiques on the example, both in
content and in this overall structure!
## Testing
To test these changes, you can run the example locally:
```shell
cargo run --example dynamic_types
```
---
## Changelog
- Add `Dynamic Types` reflection example
# Objective
- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
- Metering masks
- Compensation curves
- Smooth exposure transitions
This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.
## Solution
An auto exposure compute shader builds a 64-bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it makes it possible to give more weight to
certain areas based on a metering mask.
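A CPU-side sketch of the core idea (the real implementation is a compute shader over the rendered frame; aside from the 64-bin histogram, all numbers below are illustrative):
```rust
/// Build a 64-bin histogram of log2 luminance, then average only the bins
/// between a low and a high percentile so that outliers (deep shadow,
/// specular highlights) don't drag the exposure around.
fn target_exposure(luminances: &[f32], low_fraction: f32, high_fraction: f32) -> f32 {
    const BINS: usize = 64;
    const MIN_LOG_LUM: f32 = -10.0;
    const MAX_LOG_LUM: f32 = 10.0;

    let mut histogram = [0u32; BINS];
    for &lum in luminances {
        let log_lum = lum.max(1e-6).log2().clamp(MIN_LOG_LUM, MAX_LOG_LUM);
        let bin = ((log_lum - MIN_LOG_LUM) / (MAX_LOG_LUM - MIN_LOG_LUM) * (BINS - 1) as f32) as usize;
        histogram[bin] += 1;
    }

    let total: u32 = histogram.iter().sum();
    let low_count = (total as f32 * low_fraction) as u32;
    let high_count = (total as f32 * high_fraction) as u32;

    // Weighted average of log luminance over the kept portion of each bin.
    let (mut seen, mut sum, mut kept) = (0u32, 0.0f32, 0u32);
    for (i, &count) in histogram.iter().enumerate() {
        let bin_start = seen;
        seen += count;
        let inside = seen.min(high_count).saturating_sub(bin_start.max(low_count));
        let bin_log_lum = MIN_LOG_LUM + i as f32 / (BINS - 1) as f32 * (MAX_LOG_LUM - MIN_LOG_LUM);
        sum += bin_log_lum * inside as f32;
        kept += inside;
    }
    let average_log_lum = if kept > 0 { sum / kept as f32 } else { 0.0 };

    // Compensate: a bright scene gets negative exposure, a dark scene positive.
    -average_log_lum
}
```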
---
## Changelog
- Added: `AutoExposure` plugin that adjusts a camera's exposure based on
its scene's luminance.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Follow-up of #13184 :)
- We use `ui_test` to test compiler errors for our custom macros.
- There are four crates related to compile fail tests
- `bevy_ecs_compile_fail_tests`, `bevy_macros_compile_fail_tests`, and
`bevy_reflect_compile_fail_tests`, which actually test the macros.
-
[`bevy_compile_test_utils`](64c1c65783/crates/bevy_compile_test_utils),
which provides helpers and common patterns for these tests.
- All of these crates reside within the `crates` directory.
- This can be confusing, especially for newcomers. All of the other
folders in `crates` are actual published libraries, except for these 4.
## Solution
- Move all compile fail tests to a `compile_fail` folder under their
corresponding crate.
- E.g. `crates/bevy_ecs_compile_fail_tests` would be moved to
`crates/bevy_ecs/compile_fail`.
- Move `bevy_compile_test_utils` to `tools/compile_fail_utils`.
There are a few benefits to this approach:
1. An internal testing detail is less intrusive (and confusing) for
those who just want to browse the public Bevy interface.
2. Follows a pre-existing approach of organizing related crates inside a
larger crate's folder.
- See `bevy_gizmos/macros` for an example.
3. Makes the terms `compile_test`, `compile_fail`, and
`compile_fail_test` consistent in code. It's all just `compile_fail` now,
because we are specifically testing the error messages on compiler
failures.
- To be clear, it can still be referred to by these terms in comments and
speech; just the names of the crates and the CI command are now
consistent.
## Testing
Run the compile fail CI command:
```shell
cargo run -p ci -- compile-fail
```
If it still passes, then my refactor was successful.
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).[^1]
This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.
This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.
The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.
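A sketch of what this looks like in practice, assuming the `VisibilityRange` field names from this PR (`start_margin`/`end_margin` ranges define the crossfade bands; the helper function itself is hypothetical):
```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn spawn_lods(
    commands: &mut Commands,
    high_poly: Handle<Mesh>,
    low_poly: Handle<Mesh>,
    material: Handle<StandardMaterial>,
) {
    // Visible from the camera out to ~10 units, fading out between 9 and 10.
    commands.spawn((
        PbrBundle { mesh: high_poly, material: material.clone(), ..default() },
        VisibilityRange { start_margin: 0.0..0.0, end_margin: 9.0..10.0 },
    ));
    // Takes over from ~10 units onward, fading in across the same band.
    commands.spawn((
        PbrBundle { mesh: low_poly, material, ..default() },
        VisibilityRange { start_margin: 9.0..10.0, end_margin: 10_000.0..10_000.0 },
    ));
}
```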
Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.
Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.
A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.
[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map
[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.
## Changelog
### Added
* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).
## Screenshots
High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)
Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)
Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- #12755 introduced the need to download a file to run an example
- This means the example fails to run in CI without downloading that
file
## Solution
- Add a new metadata field to examples, "setup", that provides setup
instructions
- Replace the URL in the meshlet example with one that can actually be
downloaded
- Make example-showcase execute the setup before running an example
## Summary/Description
This PR extends states to allow support for a wider variety of state
types and patterns, by providing 3 distinct types of state:
- Standard [`States`] can only be changed by manually setting the
[`NextState<S>`] resource. These states are the baseline on which the
other state types are built, and can be used on their own for many
simple patterns. See the [state
example](https://github.com/bevyengine/bevy/blob/latest/examples/ecs/state.rs)
for a simple use case - these are the states that existed so far in
Bevy.
- [`SubStates`] are children of other states - they can be changed
manually using [`NextState<S>`], but are removed from the [`World`] if
the source states aren't in the right state. See the [sub_states
example](https://github.com/lee-orr/bevy/blob/derived_state/examples/ecs/sub_states.rs)
for a simple use case based on the derive macro, or read the trait docs
for more complex scenarios.
- [`ComputedStates`] are fully derived from other states - they provide
a [`compute`](ComputedStates::compute) method that takes in the source
states and returns their derived value. They are particularly useful for
situations where a simplified view of the source states is necessary -
such as having an `InAMenu` computed state derived from a source state
that defines multiple distinct menus. See the [computed state
example](https://github.com/lee-orr/bevy/blob/derived_state/examples/ecs/computed_states.rs)
to see a sampling of uses for these states.
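A minimal sketch of the computed-state pattern described above, using the `InAMenu` example (trait and method names as introduced by this work; see the linked examples for complete, up-to-date versions):
```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
enum AppState {
    #[default]
    MainMenu,
    SettingsMenu,
    InGame,
}

/// Derived entirely from `AppState`; it cannot be set through `NextState`.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct InAMenu;

impl ComputedStates for InAMenu {
    type SourceStates = AppState;

    fn compute(source: AppState) -> Option<Self> {
        // The computed state only exists while one of the menu states is active.
        matches!(source, AppState::MainMenu | AppState::SettingsMenu).then_some(InAMenu)
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_state::<AppState>()
        .add_computed_state::<InAMenu>()
        .add_systems(OnEnter(InAMenu), || info!("entered a menu"))
        .run();
}
```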
# Objective
This PR is another attempt at allowing Bevy to better handle complex
state objects in a manner that doesn't rely on strict equality. While my
previous attempts (https://github.com/bevyengine/bevy/pull/10088 and
https://github.com/bevyengine/bevy/pull/9957) relied on complex matching
capacities at the point of adding a system to application, this one
instead relies on deterministically deriving simple states from more
complex ones.
As a result, it does not require any special macros, nor does it change
any other interactions with the state system once you define and add
your derived state. It also maintains a degree of distinction between
`State` and just normal application state - your derivations have to end
up being discrete, pre-determined values, meaning there is less of a
risk/temptation to place a significant amount of logic and data within a
given state.
### Addition - Sub States
closes #9942
After some conversation with Maintainers & SMEs, a significant concern
was that people might attempt to use this feature as if it were
sub-states, and find themselves unable to use it appropriately. Since
`ComputedStates` is mainly a state-matching feature, while `SubStates`
are more of a state-mutation feature - but one that is easy to
add with the help of the machinery introduced by `ComputedStates` - it was
added here as well. The relevant discussion is here:
https://discord.com/channels/691052431525675048/1200556329803186316
## Solution
closes #11358
The solution is to create a new type of state - one implementing
`ComputedStates` - which is deterministically tied to one or more other
states. Implementors write a function to transform the source states
into the computed state, and it gets triggered whenever one of the
source states changes.
In addition, we added the `FreelyMutableState` trait, which is
implemented as part of the derive macro for `States`. This allows us to
limit use of `NextState<S>` to states that are actually mutable,
preventing mis-use of `ComputedStates`.
---
## Changelog
- Added `ComputedStates` trait
- Added `FreelyMutableState` trait
- Converted `NextState` resource to an Enum, with `Unchanged` and
`Pending`
- Added `App::add_computed_state::<S: ComputedStates>()`, to allow for
easily adding derived states to an App.
- Moved the `StateTransition` schedule label from `bevy_app` to
`bevy_ecs` - but maintained the export in `bevy_app` for continuity.
- Modified the process for updating states. Instead of just having an
`apply_state_transition` system that can be added anywhere, we now have
a multi-stage process that has to run within the `StateTransition`
label. First, all the state changes are calculated - manual transitions
rely on `apply_state_transition`, while computed transitions run their
computation process before both call `internal_apply_state_transition`
to apply the transition, send out the transition event, trigger
dependent states, and record which exit/transition/enter schedules need
to occur. Once all the states have been updated, the transition
schedules are called - first the exit schedules, then transition
schedules and finally enter schedules.
- Added `SubStates` trait
- Adjusted `apply_state_transition` to be a no-op if the `State<S>`
resource doesn't exist
## Migration Guide
If the user accessed the `NextState` resource's value directly or created
one from scratch, they will need to adjust to the new enum variants:
- if they created a `NextState(Some(S))` - they should now use
`NextState::Pending(S)`
- if they created a `NextState(None)` - they should now use
`NextState::Unchanged`
- if they matched on the `NextState` value, they would need to make the
adjustments above
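A before/after sketch of that change (variant names per the changelog above):
```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
enum AppState {
    #[default]
    Menu,
    InGame,
}

fn start_game(mut next_state: ResMut<NextState<AppState>>) {
    // Before this change: `*next_state = NextState(Some(AppState::InGame));`
    // Now, either construct the variant directly...
    *next_state = NextState::Pending(AppState::InGame);
    // ...or keep using the setter, which is unchanged:
    next_state.set(AppState::InGame);
}
```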
If the user manually utilized `apply_state_transition`, they should
instead use systems that trigger the `StateTransition` schedule.
---
## Future Work
There is still some future potential work in the area, but I wanted to
keep these potential features and changes separate to keep the scope
here contained, and keep the core of it easy to understand and use.
However, I do want to note some of these things, both as inspiration to
others and an illustration of what this PR could unlock.
- `NextState::Remove` - Now that the `State` related mechanisms all
utilize options (#11417), it's fairly easy to add support for explicit
state removal. And while `ComputedStates` can add and remove themselves,
right now `FreelyMutableState`s can't be removed from within the state
system. While it existed originally in this PR, it is a different
question with a separate scope and usability concerns - so having it as
it's own future PR seems like the best approach. This feature currently
lives in a separate branch in my fork, and the differences between it
and this PR can be seen here: https://github.com/lee-orr/bevy/pull/5
- `NextState::ReEnter` - this would allow you to trigger exit & entry
systems for the current state type. We can potentially also add a
`NextState::ReEnterRecursive` to also re-trigger any states that depend
on the current one.
- More mechanisms for `State` updates - This PR would finally make
states that aren't a set of exclusive Enums useful, and with that comes
the question of setting state more effectively. Right now, to update a
state you either need to fully create the new state, or include the
`Res<Option<State<S>>>` resource in your system, clone the state, mutate
it, and then use `NextState.set(my_mutated_state)` to make it the
pending next state. There are a few other potential methods that could
be implemented in future PRs:
- Inverse Compute States - these would essentially be compute states
that have an additional (manually defined) function that can be used to
nudge the source states so that they result in the computed states
having a given value. For example, you could set the `IsPaused`
state, and it would attempt to pause or unpause the game by modifying
the `AppState` as needed.
- Closure-based state modification - this would involve adding a
`NextState::modify(f: impl Fn(Option<S>) -> Option<S>)` method, and then
you can pass in closures or function pointers to adjust the state as
needed.
- Message-based state modification - this would involve creating
states that can respond to specific messages, similar to Elm or Redux.
These could either use the `NextState` mechanism or the Event mechanism.
- ~`SubStates` - which are essentially a hybrid of computed and manual
states. In the simplest (and most likely) version, they would work by
having a computed element that determines whether the state should
exist and, if it should, has the capacity to add a new version in, but
then any changes to its content would be freely mutable.~ this feature
is now part of this PR. See above.
- Lastly, since states are getting more complex there might be value in
moving them out of `bevy_ecs` and into their own crate, or at least out
of the `schedule` module into a `states` module. #11087
As mentioned, all these future work elements are TBD and are explicitly
not part of this PR - I just wanted to provide them as potential
explorations for the future.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Marcel Champagne <voiceofmarcel@gmail.com>
Co-authored-by: MiniaczQ <xnetroidpl@gmail.com>
# Objective
- `README.md` is a common file that usually gives an overview of the
folder it is in.
- When on <https://crates.io>, `README.md` is rendered as the main
description.
- Many crates in this repository are lacking `README.md` files, which
makes it more difficult to understand their purpose.
<img width="1552" alt="image"
src="https://github.com/bevyengine/bevy/assets/59022059/78ebf91d-b0c4-4b18-9874-365d6310640f">
- There are also a few inconsistencies with `README.md` files that this
PR and its follow-ups intend to fix.
## Solution
- Create a `README.md` file for all crates that do not have one.
- This file only contains the title of the crate (underscores removed,
proper capitalization, acronyms expanded) and the <https://shields.io>
badges.
- Remove the `readme` field in `Cargo.toml` for `bevy` and
`bevy_reflect`.
- This field is redundant because [Cargo automatically detects
`README.md`
files](https://doc.rust-lang.org/cargo/reference/manifest.html#the-readme-field).
The field is only needed if the file is named something else, like `INFO.md`.
- Fix capitalization of `bevy_utils`'s `README.md`.
- It was originally `Readme.md`, which is inconsistent with the rest of
the project.
- I created two commits renaming it to `README.md`, because Git appears
to be case-insensitive.
- Expand acronyms in title of `bevy_ptr` and `bevy_utils`.
- In the commit where I created all the new `README.md` files, I
preferred using expanded acronyms in the titles. (E.g. "Bevy Developer
Tools" instead of "Bevy Dev Tools".)
- This commit changes the title of existing `README.md` files to follow
the same scheme.
- I do not feel strongly about this change, please comment if you
disagree and I can revert it.
- Add <https://shields.io> badges to `bevy_time` and `bevy_transform`,
which are the only crates currently lacking them.
---
## Changelog
- Added `README.md` files to all crates missing it.
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:
* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].
* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.
* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function]
(sketched after this list).
* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.
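For reference, the [ASC CDL combined function] used by the color correction bullet above is compact enough to sketch per channel (mapping *gain*, *lift*, and *gamma* onto slope, offset, and power is the usual convention; this is an illustration, not Bevy's exact shader code):
```rust
/// ASC CDL combined function for a single color channel: the input is scaled
/// by `slope`, shifted by `offset`, clamped, then raised to `power`.
fn asc_cdl(input: f32, slope: f32, offset: f32, power: f32) -> f32 {
    (input * slope + offset).clamp(0.0, 1.0).powf(power)
}
```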
A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.
[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space
[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D
[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space
[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV
[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function
## Changelog
### Added
* Many new filmic color grading options have been added to the
`ColorGrading` component.
## Migration Guide
* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.
## Screenshots
![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)
![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
# Objective
Make compile fail tests less likely to break with new Rust versions.
Closes #12627
## Solution
Switch from [`trybuild`](https://github.com/dtolnay/trybuild) to
[`ui_test`](https://github.com/oli-obk/ui_test).
## TODO
- [x] Update `bevy_ecs_compile_fail_tests`
- [x] Update `bevy_macros_compile_fail_tests`
- [x] Update `bevy_reflect_compile_fail_tests`
---------
Co-authored-by: Oli Scherer <github35764891676564198441@oli-obk.de>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0
# Objective
- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710
## Solution
- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation is
combined from knowledge from multiple papers and articles. The approach
itself, and the shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.
## Future Improvements
- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects which results in
hard edges. There are solutions to this by either dilating the object or
the motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar, however after
playing with the parameters, it gives quite nice results with 4 samples,
and is significantly better than the artifacts generated when not
jittering.
---
## Changelog
- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity.
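A sketch of enabling the effect, assuming the bundle and component shape added here (the `shutter_angle` and `samples` field names are my reading of the PR; values are illustrative):
```rust
use bevy::core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        MotionBlurBundle {
            motion_blur: MotionBlur {
                // Fraction of a frame the virtual shutter stays open.
                shutter_angle: 0.5,
                samples: 4,
                ..default()
            },
            ..default()
        },
    ));
}
```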
---------
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
Resolve #10054.
## Solution
Make dynamic linking a no-op by omitting it from the dependency tree on
wasm targets.
To test this, try `cargo build --lib --target wasm32-unknown-unknown
--features bevy/dynamic_linking` with and without this PR.
Might need to update the book on the website to explain this when this
makes it into a release.
Co-Authored By: @daxpedda
---------
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
# Objective
- animating a sprite in response to an event is a [common beginner
problem](https://www.reddit.com/r/bevy/comments/13xx4v7/sprite_animation_in_bevy/)
## Solution
- provide a simple example to show how to animate a sprite in response
to an event
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Fix some doc warnings
- Add doc-scrape-examples to all examples
Moved from #12692
I ran `cargo +nightly doc --workspace --all-features --no-deps
-Zunstable-options -Zrustdoc-scrape-examples`
<details>
```
warning: public documentation for `GzAssetLoaderError` links to private item `GzAssetLoader`
--> examples/asset/asset_decompression.rs:24:47
|
24 | /// Possible errors that can be produced by [`GzAssetLoader`]
| ^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
= note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
warning: `bevy` (example "asset_decompression") generated 1 warning
warning: unresolved link to `shape::Quad`
--> examples/2d/mesh2d.rs:3:15
|
3 | //! [`Quad`]: shape::Quad
| ^^^^^^^^^^^ no item named `shape` in scope
|
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
warning: `bevy` (example "mesh2d") generated 1 warning
warning: unresolved link to `WorldQuery`
--> examples/ecs/custom_query_param.rs:1:49
|
1 | //! This example illustrates the usage of the [`WorldQuery`] derive macro, which allows
| ^^^^^^^^^^ no item named `WorldQuery` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
warning: `bevy` (example "custom_query_param") generated 1 warning
warning: unresolved link to `shape::Quad`
--> examples/2d/mesh2d_vertex_color_texture.rs:4:15
|
4 | //! [`Quad`]: shape::Quad
| ^^^^^^^^^^^ no item named `shape` in scope
|
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
warning: `bevy` (example "mesh2d_vertex_color_texture") generated 1 warning
warning: public documentation for `TextPlugin` links to private item `CoolText`
--> examples/asset/processing/asset_processing.rs:48:9
|
48 | /// * [`CoolText`]: a custom RON text format that supports dependencies and embedded dependencies
| ^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
= note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
warning: public documentation for `TextPlugin` links to private item `Text`
--> examples/asset/processing/asset_processing.rs:49:9
|
49 | /// * [`Text`]: a "normal" plain text file
| ^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
warning: public documentation for `TextPlugin` links to private item `CoolText`
--> examples/asset/processing/asset_processing.rs:51:57
|
51 | /// It also defines an asset processor that will load [`CoolText`], resolve embedded dependenc...
| ^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
warning: `bevy` (example "asset_processing") generated 3 warnings
warning: public documentation for `CustomAssetLoaderError` links to private item `CustomAssetLoader`
--> examples/asset/custom_asset.rs:20:47
|
20 | /// Possible errors that can be produced by [`CustomAssetLoader`]
| ^^^^^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
= note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
warning: public documentation for `BlobAssetLoaderError` links to private item `CustomAssetLoader`
--> examples/asset/custom_asset.rs:61:47
|
61 | /// Possible errors that can be produced by [`CustomAssetLoader`]
| ^^^^^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
```
```
warning: `bevy` (example "mesh2d") generated 1 warning
warning: public documentation for `log_layers_ecs` links to private item `update_subscriber`
--> examples/app/log_layers_ecs.rs:6:18
|
6 | //! Inside the [`update_subscriber`] function we will create a [`mpsc::Sender`] and a [`mpsc::R...
| ^^^^^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
= note: `#[warn(rustdoc::private_intra_doc_links)]` on by default
warning: unresolved link to `AdvancedLayer`
--> examples/app/log_layers_ecs.rs:7:72
|
7 | ... will go into the [`AdvancedLayer`] and the [`Receiver`](mpsc::Receiver) will
| ^^^^^^^^^^^^^ no item named `AdvancedLayer` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
warning: unresolved link to `LogEvents`
--> examples/app/log_layers_ecs.rs:8:42
|
8 | //! go into a non-send resource called [`LogEvents`] (It has to be non-send because [`Receiver`...
| ^^^^^^^^^ no item named `LogEvents` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
warning: public documentation for `log_layers_ecs` links to private item `transfer_log_events`
--> examples/app/log_layers_ecs.rs:9:30
|
9 | //! From there we will use [`transfer_log_events`] to transfer log events from [`LogEvents`] to...
| ^^^^^^^^^^^^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
warning: unresolved link to `LogEvents`
--> examples/app/log_layers_ecs.rs:9:82
|
9 | ...nsfer log events from [`LogEvents`] to an ECS event called [`LogEvent`].
| ^^^^^^^^^ no item named `LogEvents` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
warning: public documentation for `log_layers_ecs` links to private item `LogEvent`
--> examples/app/log_layers_ecs.rs:9:119
|
9 | ...nts`] to an ECS event called [`LogEvent`].
| ^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
warning: public documentation for `log_layers_ecs` links to private item `LogEvent`
--> examples/app/log_layers_ecs.rs:11:49
|
11 | //! Finally, after all that we can access the [`LogEvent`] event from our systems and use it.
| ^^^^^^^^ this item is private
|
= note: this link will resolve properly if you pass `--document-private-items`
```
</details>
# Objective
-
[`clippy::ref_as_ptr`](https://rust-lang.github.io/rust-clippy/master/index.html#/ref_as_ptr)
prevents you from directly casting references to pointers, requiring you
to use `std::ptr::from_ref` instead. This prevents you from accidentally
converting an immutable reference into a mutable pointer (`&x as *mut
T`).
- Follow up to #11818, now that our [`rust-version` is
1.77](11817f4ba4/Cargo.toml (L14)).
## Solution
- Enable lint and fix all warnings.
Allows the user to select a scene to load, then a loading screen is
shown until all assets are loaded, and pipelines compiled.
# Objective
- Fixes #12654
## Solution
- Add desired assets to be monitored to a list.
- While there are assets that are not fully loaded, show a loading
screen.
- Once all assets are loaded, and pipelines compiled, show the scene
that was loaded.
# Objective
- Fixes #12411
- Add an example demonstrating the usage of asset meta files.
## Solution
- Add a new example displaying a basic scene of three pixelated images
- Apply a .meta file to one of the assets setting Nearest filtering
- Use AssetServer::load_with_settings on the last one as another way to
achieve the same effect
- The result is one blurry image and two crisp images demonstrating a
common scenario in which changing settings is useful.
# Objective
- It's pretty common for users to want to read data back from the gpu
and into the main world
## Solution
- Add a simple example that shows how to read data back from the gpu and
send it to the main world using a channel.
- The example is largely based on this wgpu example but adapted to bevy
-
fb305b85f6/examples/src/repeated_compute/mod.rs
---------
Co-authored-by: stormy <120167078+stowmyy@users.noreply.github.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
Fix #11931
## Solution
- Make stepping a non-default feature
- Adjust documentation and examples
- In particular, make the breakout example not show the stepping prompt
if compiled without the feature (shows a log message instead)
---
## Changelog
- Removed `bevy_debug_stepping` from default features
## Migration Guide
The system-by-system stepping feature is now disabled by default; to use
it, enable the `bevy_debug_stepping` feature explicitly:
```toml
[dependencies]
bevy = { version = "0.14", features = ["bevy_debug_stepping"] }
```
Code using
[`Stepping`](https://docs.rs/bevy/latest/bevy/ecs/schedule/struct.Stepping.html)
will still compile with the feature disabled, but will print a runtime
error message to the console if the application attempts to enable
stepping.
---------
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- Since #12453, `DeterministicRenderingConfig` doesn't do anything
## Solution
- Remove it
---
## Migration Guide
- Removed `DeterministicRenderingConfig`. There shouldn't be any z
fighting anymore in the rendering even without setting
`stable_sort_z_fighting`
Created a soundtrack example with fade-in and fade-out features, added
new assets, and updated the credits.
# Objective
- Fixes #12651
## Solution
- Created a resource to hold the track list.
- The audio assets are then loaded by the asset server and added to the
track list.
- Once the game is in a specific state, an `AudioBundle` is spawned and
plays the appropriate track.
- The audio volume starts at zero and is then incremented gradually
until it reaches full volume.
- Once the game state changes, the current track fades out, and a new
one fades in at the same time, offering a relatively seamless
transition.
- Once a track is completely faded out, it is despawned from the app.
- Game state changes are simulated through a `Timer` for simplicity.
- Track change system is only run if there is a change in the
`GameState` resource.
- All tracks are used according to their respective licenses.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Wireframes are currently supported for 3D meshes using the
`WireframePlugin` in `bevy_pbr`. This PR adds the same functionality for
2D meshes.
Closes #5881.
## Solution
Since there's no easy way to share material implementations between 2D,
3D, and UI, this is mostly a straight copy and rename from the original
plugin into `bevy_sprite`.
<img width="1392" alt="image"
src="https://github.com/bevyengine/bevy/assets/3961616/7aca156f-448a-4c7e-89b8-0a72c5919769">
---
## Changelog
- Added `Wireframe2dPlugin` and related types to support 2D wireframes.
- Added an example to demonstrate how to use 2D wireframes
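A sketch of the 2D usage, mirroring the 3D plugin's per-entity component (names as added by this PR, to the best of my reading; the mesh and material are only there to have something to outline):
```rust
use bevy::prelude::*;
use bevy::sprite::{MaterialMesh2dBundle, Wireframe2d, Wireframe2dPlugin};

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, Wireframe2dPlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
) {
    commands.spawn(Camera2dBundle::default());
    // The wireframe is drawn for any 2D mesh tagged with `Wireframe2d`.
    commands.spawn((
        MaterialMesh2dBundle {
            mesh: meshes.add(Circle::new(50.0)).into(),
            material: materials.add(Color::srgb(0.2, 0.4, 0.8)),
            ..default()
        },
        Wireframe2d,
    ));
}
```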
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>