# Objective
Simplify bind group creation code. An alternative to (and based on) #9476
## Solution
- Add a `BindGroupEntries` struct that can transparently be used where
`&[BindGroupEntry<'b>]` is required in `BindGroupDescriptor`s.
Allows constructing the descriptor's entries as:
```rust
render_device.create_bind_group(
"my_bind_group",
&my_layout,
&BindGroupEntries::with_indexes((
(2, &my_sampler),
(3, my_uniform),
)),
);
```
instead of
```rust
render_device.create_bind_group(
"my_bind_group",
&my_layout,
&[
BindGroupEntry {
binding: 2,
resource: BindingResource::Sampler(&my_sampler),
},
BindGroupEntry {
binding: 3,
resource: my_uniform,
},
],
);
```
or
```rust
render_device.create_bind_group(
"my_bind_group",
&my_layout,
&BindGroupEntries::sequential((&my_sampler, my_uniform)),
);
```
instead of
```rust
render_device.create_bind_group(
"my_bind_group",
&my_layout,
&[
BindGroupEntry {
binding: 0,
resource: BindingResource::Sampler(&my_sampler),
},
BindGroupEntry {
binding: 1,
resource: my_uniform,
},
],
);
```
The struct has no user-facing macros, is tuple-type-based (so
stack-allocated), and has no noticeable impact on compile time.
- Also adds a `DynamicBindGroupEntries` struct with a similar API that
uses a `Vec` under the hood and allows extending the entries (see the
sketch after this list).
- Modifies `RenderDevice::create_bind_group` to take separate arguments
`label`, `layout` and `entries` instead of a `BindGroupDescriptor`
struct. The struct can't be stored due to the internal references, and
with only 3 members arguably does not add enough context to justify
itself.
- Modify the codebase to use the new api and the `BindGroupEntries` /
`DynamicBindGroupEntries` structs where appropriate (whenever the
entries slice contains more than 1 member).
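A rough sketch of how the extendable variant can be used (the method names `sequential` and `extend_sequential` mirror the `BindGroupEntries` constructors above and are assumptions rather than the verbatim API):
```rust
// Build a Vec-backed set of entries and append more later, e.g. for optional bindings.
let mut entries = DynamicBindGroupEntries::sequential((&base_sampler, base_uniform));
if let Some(extra_texture) = optional_texture {
    // Appending works because the struct is backed by a `Vec`.
    entries = entries.extend_sequential((extra_texture,));
}
let bind_group = render_device.create_bind_group("my_bind_group", &my_layout, &entries);
```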
## Migration Guide
- Calls to `RenderDevice::create_bind_group(BindGroupDescriptor {
label, layout, entries })` must be amended to
`RenderDevice::create_bind_group(label, layout, entries)`.
- If `label`s have been specified as `"bind_group_name".into()`, they
need to change to just `"bind_group_name"`. `Some("bind_group_name")`
and `None` will still work, but `Some("bind_group_name")` can optionally
be simplified to just `"bind_group_name"`.
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
- bump naga_oil to 0.10
- update shader imports to use rusty syntax
## Migration Guide
naga_oil 0.10 reworks the import mechanism to support more rusty syntax,
and tests for item use before importing to determine which imports are
modules and which are items, which allows:
- use rust-style imports
```
#import bevy_pbr::{
pbr_functions::{alpha_discard as discard, apply_pbr_lighting},
mesh_bindings,
}
```
- import partial paths:
```
#import part::of::path
...
path::remainder::function();
```
which will call to `part::of::path::remainder::function`
- use fully qualified paths without importing:
```
// #import bevy_pbr::pbr_functions
bevy_pbr::pbr_functions::pbr()
```
- use imported items without qualifying
```
#import bevy_pbr::pbr_functions::pbr
// for backwards compatibility the old style is still supported:
// #import bevy_pbr::pbr_functions pbr
...
pbr()
```
- allows most imported items to end with `_` and numbers (naga_oil#30).
still doesn't allow struct members to end with `_` or numbers but it's
progress.
- the vast majority of existing shader code will work without changes,
but will emit "deprecated" warnings for old-style imports. these can be
suppressed with the `allow-deprecated` feature.
- partly breaks overrides (as far as i'm aware nobody uses these yet) -
now overrides will only be applied if the overriding module is added as
an additional import in the arguments to `Composer::make_naga_module` or
`Composer::add_composable_module`. this is necessary to support
determining whether imports are modules or items.
# Objective
This PR aims to make it so that we don't accidentally go over
`MAX_TEXTURE_IMAGE_UNITS` (in WebGL) or
`maxSampledTexturesPerShaderStage` (in WebGPU), giving us some extra
leeway to add more view bind group textures.
(This PR is extracted from—and unblocks—#8015)
## Solution
- We replace the existing `view_layout` and `view_layout_multisampled`
pair with an array of 32 bind group layouts, generated ahead of time;
- For now, these layouts cover all the possible combinations of:
`multisampled`, `depth_prepass`, `normal_prepass`,
`motion_vector_prepass` and `deferred_prepass`:
- In the future, as @JMS55 pointed out, we can likely take out
`motion_vector_prepass` and `deferred_prepass`, as these are not really
needed for the mesh pipeline and can use separate pipelines. This would
bring the possible combinations down to 8;
- We can also add more "optional" textures as they become needed,
allowing the engine to scale to a wider variety of use cases in lower
end/web environments (e.g. some apps might just want normal and depth
prepasses, others might only want light probes), while still keeping a
high ceiling for high end native environments where more textures are
supported.
- While preallocating bind group layouts is relatively cheap, the number
of combinations grows exponentially, so we should likely limit ourselves
to something like at most 256–1024 total layouts until we find a better
solution (like generating them lazily)
- To make this mechanism a little bit more explicit/discoverable, so
that compatibility with WebGPU/WebGL is not broken by accident, we add a
`MESH_PIPELINE_VIEW_LAYOUT_SAFE_MAX_TEXTURES` const and warn whenever
the number of textures in the layout crosses it.
- The warning is gated by `#[cfg(debug_assertions)]` and not issued in
release builds;
- We're counting the actual textures in the bind group layout instead of
using some roundabout metric so it should be accurate;
- Right now `MESH_PIPELINE_VIEW_LAYOUT_SAFE_MAX_TEXTURES` is set to 10
in order to leave 6 textures free for other groups;
- Currently there's no combination that would cause us to go over the
limit, but that will change once #8015 lands.
---
## Changelog
- `MeshPipeline` view bind group layouts now vary based on the current
multisampling and prepass states, saving a couple of texture binding
entries when prepasses are not in use.
## Migration Guide
- `MeshPipeline::view_layout` and
`MeshPipeline::view_layout_multisampled` have been replaced with a
private array to accommodate variable view bind group layouts. To
obtain a view bind group layout for the current pipeline state, use the
new `MeshPipeline::get_view_layout()` or
`MeshPipeline::get_view_layout_from_key()` methods.
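As a rough migration sketch (assuming a `specialize`-style context on `MeshPipeline` where a pipeline key named `key` is available; the `msaa_samples()` helper is an assumption):
```rust
// Before: pick one of two fixed layouts based on MSAA alone.
// let view_layout = if key.msaa_samples() > 1 {
//     self.view_layout_multisampled.clone()
// } else {
//     self.view_layout.clone()
// };

// After: derive the layout from the full pipeline key, which also
// accounts for which prepasses are enabled for the view.
let view_layout = self.get_view_layout_from_key(key).clone();
```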
# Objective
Users shouldn't need to change their source code between "development
workflows" and "releasing". Currently, Bevy Asset V2 has two "processed"
asset modes `Processed` (assumes assets are already processed) and
`ProcessedDev` (starts an asset processor and processes assets). This
means that the mode must be changed _in code_ when switching from "app
dev" to "release". Very suboptimal.
We have already removed "runtime opt-in" for hot-reloading. Enabling the
`file_watcher` feature _automatically_ enables file watching in code.
This means deploying a game (without hot reloading enabled) just means
calling `cargo build --release` instead of `cargo run --features
bevy/file_watcher`.
We should adopt this pattern for asset processing.
## Solution
This adds the `asset_processor` feature, which will start the
`AssetProcessor` when an `AssetPlugin` runs in `AssetMode::Processed`.
The "asset processing workflow" is now:
1. Enable `AssetMode::Processed` on `AssetPlugin`
2. When developing, run with the `asset_processor` and `file_watcher`
features
3. When releasing, build without these features.
The `AssetMode::ProcessedDev` mode has been removed.
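For example, the code side stays the same across both workflows (a minimal sketch; `AssetMode::Processed` is configured once and only the cargo features change between dev and release):
```rust
use bevy::asset::{AssetMode, AssetPlugin};
use bevy::prelude::*;

fn main() {
    App::new()
        // Step 1: opt in to processed assets, for both dev and release builds.
        .add_plugins(DefaultPlugins.set(AssetPlugin {
            mode: AssetMode::Processed,
            ..default()
        }))
        .run();
}
```
During development this is run with `cargo run --features bevy/asset_processor,bevy/file_watcher`; a release build (`cargo build --release`) omits those features and simply reads the already-processed assets.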
# Objective
I encountered a problem where I had a plugin `FooPlugin` which did
```rust
impl Plugin for FooPlugin {
fn build(&self, app: &mut App) {
app
.register_asset_source(...); // more stuff after
}
}
```
And when I tried using it, e.g.
```rust
asset_server.load("foo://data/asset.custom");
```
I got an error that `foo` was not recognized as a source.
I found that this is because asset sources must be registered _before_
`AssetPlugin` is added, and I had `FooPlugin` _after_.
## Solution
Add clarifying note about having to register sources before
`AssetPlugin` is added.
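A minimal sketch of the ordering that works, reusing the hypothetical `FooPlugin` from above:
```rust
use bevy::prelude::*;

fn main() {
    App::new()
        // `FooPlugin` calls `register_asset_source`, so it must be added
        // before `AssetPlugin`, which is part of `DefaultPlugins`.
        .add_plugins((FooPlugin, DefaultPlugins))
        .run();
}
```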
Signed-off-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
Co-authored-by: Torstein Grindvik <torstein.grindvik@muybridge.com>
# Objective
- Provides actionable feedback when users encounter the error in
https://github.com/bevyengine/bevy/issues/10162
- Complements https://github.com/bevyengine/bevy/pull/10186
## Solution
- Log an error when registering an AssetSource after the AssetPlugin has
been built (via DefaultPlugins). This will let users know that their
registration order needs changing
The outputted error message will look like this:
```
ERROR bevy_asset::server: 'AssetSourceId::Name(test)' must be registered before `AssetPlugin` (typically added as part of `DefaultPlugins`)
```
---------
Co-authored-by: 66OJ66 <hi0obxud@anonaddy.me>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Fixes #10177.
## Solution
Added a new run condition and tweaked the docs for `on_timer`.
## Changelog
### Added
- `on_real_time_timer` run condition
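A usage sketch; the condition name is taken from the changelog above, and the import path is an assumption (mirroring where `on_timer` lives):
```rust
use std::time::Duration;
use bevy::prelude::*;
// Assumed path, alongside `on_timer`:
use bevy::time::common_conditions::on_real_time_timer;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Runs roughly once per second of real (unscaled, unpaused) time.
        .add_systems(Update, tick.run_if(on_real_time_timer(Duration::from_secs(1))))
        .run();
}

fn tick() {
    info!("one second of real time has elapsed");
}
```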
# Objective
- Make it possible to move the light position around in the
`shadow_biases` example
- Also support resetting the depth/normal biases to the engine defaults,
or zero.
## Solution
- The light position is displayed in the text overlay.
- The light position can be adjusted with
left/right/up/down/pgup/pgdown.
- The depth/normal biases can be reset to defaults by pressing R, or to
zero by pressing Z.
---------
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
# Objective
- Demonstrate the different shadow PCF methods in the `shadow_biases`
example
## Solution
- Cycle through the available methods when pressing the `F` key
- Display which filter method is being used
# Objective
This PR addresses the issue where Bevy displays one or several black
frames before the scene is first rendered. This is particularly
noticeable on iOS, where the black frames disrupt the transition from
the launch screen to the game UI. I have written about my search to
solve this issue on the Bevy discord:
https://discord.com/channels/691052431525675048/1151047604520632352
While I can attest this PR works on both iOS and Linux/Wayland (and even
seems to resolve a slight flicker during startup with the latter as
well), I'm not familiar enough with Bevy to judge the full implications
of these changes. I hope a reviewer or tester can help me confirm
whether this is the right approach, or what might be a cleaner solution
to resolve this issue.
## Solution
I have moved the "startup phase" as well as the plugin finalization into
the `app.run()` function so those things finish synchronously before the
"main schedule" starts. I even move one frame forward as well, using
`app.update()`, to make sure the rendering has caught up with the state
of the finalized plugins as well.
I admit that part of this was achieved through trial-and-error, since
not doing the "startup phase" *before* `app.finish()` resulted in
panics, while not calling an extra `app.update()` didn't fully resolve
the issue.
What I *can* say, is that the iOS launch screen animation works in such
a way that the OS initiates the transition once the framework's
[`didFinishLaunching()`](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1622921-application)
returns, meaning app developers **must** finish setting up their UI
before that function returns. This is what basically led me on the path
to try to "finish stuff earlier" :)
## Changelog
### Changed
- The startup phase and the first frame are rendered synchronously when
calling `app.run()`, before the "main schedule" is started. This fixes
black frames during the iOS launch transition and possible flickering on
other platforms, but may affect initialization order in your
application.
## Migration Guide
Because of this change, the timing of the first few frames might have
changed, and I think it could be that some things one may expect to be
initialized in a system may no longer be. To be honest, I feel out of my
depth to judge the exact impact here.
# Objective
Add a way to easily compute the up-to-date `GlobalTransform` of an
entity.
## Solution
Add the `TransformHelper` (name pending) system parameter with the
`compute_global_transform` method that takes an `Entity` and returns a
`GlobalTransform` if successful.
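A sketch of what usage might look like (the import path and the camera query are illustrative):
```rust
use bevy::prelude::*;
use bevy::transform::helper::TransformHelper;

// Computes an up-to-date `GlobalTransform` for the camera, even if
// transform propagation has not run yet this frame.
fn log_camera_position(helper: TransformHelper, camera: Query<Entity, With<Camera>>) {
    let Ok(entity) = camera.get_single() else {
        return;
    };
    if let Ok(global) = helper.compute_global_transform(entity) {
        info!("camera is at {:?}", global.translation());
    }
}
```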
## Changelog
- Added the `TransformHelper` system parameter for computing the
up-to-date `GlobalTransform` of an entity.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Noah <noahshomette@gmail.com>
# Objective
- Fixes #9382
## Solution
- Added a few extra notes in regards to WebGPU experimental state and
the need of enabling unstable APIs through certain attribute flags in
`cargo_features.md` and the examples `README.md` files.
# Objective
- Add a contributing header to make it easier for people new to GitHub
to get started.
- Add links that point to issues and PRs.
## Solution
- Updated the README.md to show a contributing section under the
Community header
- Added links in that section pointing to issues and PRs
# Objective
allow extending `Material`s (including the built in `StandardMaterial`)
with custom vertex/fragment shaders and additional data, to easily get
pbr lighting with custom modifications, or otherwise extend a base
material.
## Solution
- added `ExtendedMaterial<B: Material, E: MaterialExtension>` which
contains a base material and a user-defined extension.
- added example `extended_material` showing how to use it
- modified AsBindGroup to have "unprepared" functions that return raw
resources / layout entries so that the extended material can combine
them
note: doesn't currently work with array resources, as i can't figure out
how to make the OwnedBindingResource::get_binding() work, as wgpu
requires a `&'a[&'a TextureView]` and i have a `Vec<TextureView>`.
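A condensed sketch of the intended usage, loosely following the new `extended_material` example (the derive list, binding index and shader path are illustrative):
```rust
use bevy::pbr::{ExtendedMaterial, MaterialExtension};
use bevy::prelude::*;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};

// Extra data layered on top of the base material's bindings.
#[derive(Asset, AsBindGroup, TypePath, Debug, Clone)]
struct MyExtension {
    #[uniform(100)]
    quantize_steps: u32,
}

impl MaterialExtension for MyExtension {
    // Only override what differs from the base material.
    fn fragment_shader() -> ShaderRef {
        "shaders/extended_material.wgsl".into()
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Register a material that combines StandardMaterial with the extension.
        .add_plugins(MaterialPlugin::<ExtendedMaterial<StandardMaterial, MyExtension>>::default())
        .run();
}
```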
## Migration Guide
manual implementations of `AsBindGroup` will need to be adjusted, the
changes are pretty straightforward and can be seen in the diff for e.g.
the `texture_binding_array` example.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
# Objective
Deferred rendering doesn't currently run unless one of `DepthPrepass`,
`ForwardPrepass` or `MotionVectorPrepass` is also present on the camera.
## Solution
Modify the `queue_prepass_material_meshes` view query to check for any
relevant phase, instead of requiring both `Opaque3dPrepass` and
`AlphaMask3dPrepass` to be present.
# Objective
- Correct the description of an error type for the scene loader
## Solution
- Correct the description of an error type for the scene loader
# Objective
- This PR aims to make the various `*_PREPASS` shader defs we have
(`NORMAL_PREPASS`, `DEPTH_PREPASS`, `MOTION_VECTORS_PREPASS` and
`DEFERRED_PREPASS`) easier to use and understand:
- So that their meaning is now consistent across all contexts (“prepass
X is enabled for the current view”);
- So that they're also consistently set across all contexts.
- It also aims to enable us (with a follow-up PR) to conditionally
gate the `BindGroupEntry` and `BindGroupLayoutEntry` items associated
with these prepasses, saving us up to 4 texture slots in WebGL
(currently globally limited to 16 per shader, regardless of bind groups)
## Solution
- We now consistently set these from `PrepassPipeline`, the
`MeshPipeline` and the `DeferredLightingPipeline`, we also set their
`MeshPipelineKey`s;
- We introduce `PREPASS_PIPELINE`, `MESH_PIPELINE` and
`DEFERRED_LIGHTING_PIPELINE` that can be used to detect where the code
is running, without overloading the meanings of the prepass shader defs;
- We also gate the WGSL functions in `bevy_pbr::prepass_utils` with
`#ifdef`s for their respective shader defs, so that shader code can
provide a fallback whenever they're not available.
- This allows us to conditionally include the bindings for these prepass
textures (My next PR, which will hopefully unblock #8015)
- @robtfm mentioned [these were being used to prevent accessing the same
binding as read/write in the
prepass](https://discord.com/channels/691052431525675048/743663924229963868/1163270458393759814),
however even after reversing the `#ifndef`s I had no issues running the
code, so perhaps the compiler is already smart enough even without tree
shaking to know they're not being used, thanks to `#ifdef
PREPASS_PIPELINE`?
## Comparison
### Before
| Shader Def               | `PrepassPipeline` | `MeshPipeline` | `DeferredLightingPipeline` |
| ------------------------ | ----------------- | -------------- | -------------------------- |
| `NORMAL_PREPASS`         | Yes               | No             | No                         |
| `DEPTH_PREPASS`          | Yes               | No             | No                         |
| `MOTION_VECTORS_PREPASS` | Yes               | No             | No                         |
| `DEFERRED_PREPASS`       | Yes               | No             | No                         |

| View Key                 | `PrepassPipeline` | `MeshPipeline` | `DeferredLightingPipeline` |
| ------------------------ | ----------------- | -------------- | -------------------------- |
| `NORMAL_PREPASS`         | Yes               | Yes            | No                         |
| `DEPTH_PREPASS`          | Yes               | No             | No                         |
| `MOTION_VECTORS_PREPASS` | Yes               | No             | No                         |
| `DEFERRED_PREPASS`       | Yes               | Yes\*          | No                         |
\* Was accidentally being set twice: once with only
`deferred_prepass.is_some()` as a condition,
and once with `deferred_prepass.is_some() && !forward` as a condition.
### After
| Shader Def                   | `PrepassPipeline` | `MeshPipeline`  | `DeferredLightingPipeline` |
| ---------------------------- | ----------------- | --------------- | -------------------------- |
| `NORMAL_PREPASS`             | Yes               | Yes             | Yes                        |
| `DEPTH_PREPASS`              | Yes               | Yes             | Yes                        |
| `MOTION_VECTORS_PREPASS`     | Yes               | Yes             | Yes                        |
| `DEFERRED_PREPASS`           | Yes               | Yes             | Unconditionally            |
| `PREPASS_PIPELINE`           | Unconditionally   | No              | No                         |
| `MESH_PIPELINE`              | No                | Unconditionally | No                         |
| `DEFERRED_LIGHTING_PIPELINE` | No                | No              | Unconditionally            |

| View Key                 | `PrepassPipeline` | `MeshPipeline` | `DeferredLightingPipeline` |
| ------------------------ | ----------------- | -------------- | -------------------------- |
| `NORMAL_PREPASS`         | Yes               | Yes            | Yes                        |
| `DEPTH_PREPASS`          | Yes               | Yes            | Yes                        |
| `MOTION_VECTORS_PREPASS` | Yes               | Yes            | Yes                        |
| `DEFERRED_PREPASS`       | Yes               | Yes            | Unconditionally            |
---
## Changelog
- Cleaned up WGSL `*_PREPASS` shader defs so they're now consistently
used everywhere;
- Introduced `PREPASS_PIPELINE`, `MESH_PIPELINE` and
`DEFERRED_LIGHTING_PIPELINE` WGSL shader defs for conditionally
compiling logic based on the current pipeline;
- WGSL functions from `bevy_pbr::prepass_utils` are now guarded with
`#ifdef` based on the currently enabled prepasses;
## Migration Guide
- When using functions from `bevy_pbr::prepass_utils`
(`prepass_depth()`, `prepass_normal()`, `prepass_motion_vector()`) in
contexts where these prepasses might be disabled, you should now wrap
your calls with the appropriate `#ifdef` guards, (`#ifdef
DEPTH_PREPASS`, `#ifdef NORMAL_PREPASS`, `#ifdef MOTION_VECTOR_PREPASS`)
providing fallback logic where applicable.
---------
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
# Objective
Fixes #10086
## Solution
Instead of serializing via `DynamicTypePath::reflect_type_path`, now
uses the `TypePath` found on the `TypeInfo` returned by
`Reflect::get_represented_type_info`.
This issue was happening because the dynamic types implement `TypePath`
themselves and do not (and cannot) forward their proxy's `TypePath`
data. The solution was to access the proxy's type information in order
to get the correct `TypePath` data.
## Changed
- The `Debug` impl for `TypePathTable` now includes output for all
fields.
# Objective
- The cargo deny job for bans is failing too often to be useful
- Having only one version of all dependencies is not realistic
## Solution
- Only warn on multiple dependencies of a crate
- Deny some specific crates that we know shouldn't be present multiple
times
# Objective
- We should encourage use of `chain`, which is simpler to reason about.
## Solution
- Use it in the breakout example
---
## Changelog
- Switch breakout to use `chain` instead of `before` and `after`
# Objective
The delta time clamping warning triggers consistently for apps that load non-trivial
things. While this _is_ an indicator that we should probably try to move
this work to a thread that doesn't block the Update, this is common
enough (and unactionable enough) that I think we should demote it for
now.
```
2023-10-16T18:46:14.918781Z WARN bevy_time::virt: delta time larger than maximum delta, clamping delta to 250ms and skipping 63.649253ms
2023-10-16T18:46:15.178048Z WARN bevy_time::virt: delta time larger than maximum delta, clamping delta to 250ms and skipping 1.71611ms
```
## Solution
Change `warn` to `debug` for this message
# Objective
While pointing someone to the profiling doc, I saw a source link and
thought "hm, I wonder if that link is up-to-date?"
After clicking on it, I realized that it wasn't even attempting to point
to the right line -- probably a good idea since that would be super
prone to breakage.
However, the system being referenced is pub and the docs are on docs.rs,
so we can just link there. This gets the content straight onto the
user's screen.
## Solution
Change source link to docs link
## Note
This is slightly awkward in that the profiling docs themselves aren't
rendered anywhere and just live in the repo. It does feel more correct
to link to in-repo code on the same branch.
# Objective
Closes #10107
The visibility of `bevy::core::update_frame_count` should be `pub` so it
can be used in third-party code like this:
```rust
impl Plugin for MyPlugin {
fn build(&self, app: &mut App) {
app.add_systems(Last, use_frame_count.before(bevy::core::update_frame_count));
}
}
```
## Solution
Make `bevy::core::update_frame_count` public.
---
## Changelog
### Added
- Documentation for `bevy::core::update_frame_count`
### Changed
- Visibility of `bevy::core::update_frame_count` is now `pub`
# Objective
- Added support for newer AMD Radeon cards in the mod.rs file located at
``crates/bevy_render/src/view/window/mod.rs``
## Solution
- All I needed to add was ``name.starts_with("Radeon") ||`` to the
existing code on line 347 of
``crates/bevy_render/src/view/window/mod.rs``
---
## Changelog
- Changed ``crates/bevy_render/src/view/window/mod.rs``
# Objective
Current `FixedTime` and `Time` have several problems. This pull aims to
fix many of them at once.
- If there is a longer pause between app updates, time will jump forward
a lot at once and fixed time will iterate on `FixedUpdate` for a large
number of steps. If the pause is merely seconds, then this will just
mean jerkiness and possible unexpected behaviour in gameplay. If the
pause is hours/days as with OS suspend, the game will appear to freeze
until it has caught up with real time.
- If calculating a fixed step takes longer than specified fixed step
period, the game will enter a death spiral where rendering each frame
takes longer and longer due to more and more fixed step updates being
run per frame and the game appears to freeze.
- There is no way to see current fixed step elapsed time inside fixed
steps. In order to track this, the game designer needs to add a custom
system inside `FixedUpdate` that calculates elapsed or step count in a
resource.
- Access to delta time inside a fixed step is `FixedStep::period` rather
than `Time::delta`. This, coupled with the issue that `Time::elapsed`
isn't available at all for fixed steps, means that time-dependent
systems are either implemented to run in `FixedUpdate` or in `Update`,
but rarely work in both.
- Fixes #8800
- Fixes #8543
- Fixes #7439
- Fixes #5692
## Solution
- Create a generic `Time<T>` clock that has no processing logic but
which can be instantiated for multiple usages. This is also exposed for
users to add custom clocks.
- Create three standard clocks, `Time<Real>`, `Time<Virtual>` and
`Time<Fixed>`, all of which contain their individual logic.
- Create one "default" clock, which is just `Time` (or `Time<()>`),
which will be overwritten from `Time<Virtual>` on each update, and
`Time<Fixed>` inside `FixedUpdate` schedule. This way systems that do
not care specifically which time they track can work both in `Update`
and `FixedUpdate` without changes and the behaviour is intuitive.
- Add `max_delta` to virtual time update, which limits how much can be
added to virtual time by a single update. This fixes both the behaviour
after a long freeze, and also the death spiral by limiting how many
fixed timestep iterations there can be per update. Possible future work
could be adding `max_accumulator` to add a sort of "leaky bucket" time
processing to possibly smooth out jumps in time while keeping frame rate
stable.
- Many minor tweaks and clarifications to the time functions and their
documentation.
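A small sketch of the resulting ergonomics (assuming `Fixed` is re-exported through the prelude):
```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Optional: restore the previous 60 Hz fixed timestep.
        .insert_resource(Time::<Fixed>::from_hz(60.0))
        // The same system works in both schedules: `Time` mirrors
        // `Time<Virtual>` in `Update` and `Time<Fixed>` in `FixedUpdate`.
        .add_systems(Update, print_time)
        .add_systems(FixedUpdate, print_time)
        .run();
}

fn print_time(time: Res<Time>) {
    info!("delta: {:?}, elapsed: {:?}", time.delta(), time.elapsed());
}
```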
## Changelog
- `Time::raw_delta()`, `Time::raw_elapsed()` and related methods are
moved to `Time<Real>::delta()` and `Time<Real>::elapsed()` and now match
`Time` API
- `FixedTime` is now `Time<Fixed>` and matches `Time` API.
- `Time<Fixed>` default timestep is now 64 Hz, or 15625 microseconds.
- `Time` inside `FixedUpdate` now reflects fixed timestep time, making
systems portable between `Update` and `FixedUpdate`.
- `Time::pause()`, `Time::set_relative_speed()` and related methods must
now be called as `Time<Virtual>::pause()` etc.
- There is a new `max_delta` setting in `Time<Virtual>` that limits how
much the clock can jump by a single update. The default value is 0.25
seconds.
- Removed `on_fixed_timer()` condition as `on_timer()` does the right
thing inside `FixedUpdate` now.
## Migration Guide
- Change all `Res<Time>` instances that access `raw_delta()`,
`raw_elapsed()` and related methods to `Res<Time<Real>>` and `delta()`,
`elapsed()`, etc.
- Change access to `period` from `Res<FixedTime>` to `Res<Time<Fixed>>`
and use `delta()`.
- The default timestep has been changed from 60 Hz to 64 Hz. If you wish
to restore the old behaviour, use
`app.insert_resource(Time::<Fixed>::from_hz(60.0))`.
- Change `app.insert_resource(FixedTime::new(duration))` to
`app.insert_resource(Time::<Fixed>::from_duration(duration))`
- Change `app.insert_resource(FixedTime::new_from_secs(secs))` to
`app.insert_resource(Time::<Fixed>::from_seconds(secs))`
- Change `system.on_fixed_timer(duration)` to
`system.on_timer(duration)`. Timers in systems placed in `FixedUpdate`
schedule automatically use the fixed time clock.
- Change `ResMut<Time>` calls to `pause()`, `is_paused()`,
`set_relative_speed()` and related methods to `ResMut<Time<Virtual>>`
calls. The API is the same, with the exception that `relative_speed()`
will return the actual last set relative speed, while
`effective_relative_speed()` returns 0.0 if the time is paused and
corresponds to the speed that was set when the update for the current
frame started.
## Todo
- [x] Update pull name and description
- [x] Top level documentation on usage
- [x] Fix examples
- [x] Decide on default `max_delta` value
- [x] Decide naming of the three clocks: is `Real`, `Virtual`, `Fixed`
good?
- [x] Decide if the three clock inner structures should be in prelude
- [x] Decide on best way to configure values at startup: is manually
inserting a new clock instance okay, or should there be config struct
separately?
- [x] Fix links in docs
- [x] Decide what should be public and what not
- [x] Decide how `wrap_period` should be handled when it is changed
- [x] ~~Add toggles to disable setting the clock as default?~~ No,
separate pull if needed.
- [x] Add tests
- [x] Reformat, ensure adheres to conventions etc.
- [x] Build documentation and see that it looks correct
## Contributors
Huge thanks to @alice-i-cecile and @maniwani while building this pull.
It was a shared effort!
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Cameron <51241057+maniwani@users.noreply.github.com>
Co-authored-by: Jerome Humbert <djeedai@gmail.com>
# Objective
Calling `asset_server.load("scene.gltf#SomeLabel")` will silently fail
if `SomeLabel` does not exist.
Referenced in #9714
## Solution
We now detect this case and return an error. I also slightly refactored
`load_internal` to make the logic / dataflow much clearer.
---------
Co-authored-by: Pascal Hertleif <killercup@gmail.com>
# Objective
As called out in #9714, Bevy Asset V2 fails to hot-reload labeled assets
whose source asset has changed (in cases where the root asset is not
alive).
## Solution
Track alive labeled assets for a given source asset and allow hot
reloads in cases where a labeled asset is still alive.
# Objective
- Since #9885, running on an iOS device crashes trying to create the
processed folder
- This only happens on real device, not on the simulator
## Solution
- Setup processed assets only if needed
# Objective
On nightly there is a warning on a missing lifetime:
```bash
warning: `&` without an explicit lifetime name cannot be used here
```
The details are in https://github.com/rust-lang/rust/issues/115010, but
the bottom line is that in associated constants elided lifetimes are no
longer allowed to be implicitly defined.
This fixes the only place where it is missing.
## Solution
- Add explicit `'static` lifetime
# Objective
Currently, the asset loader outputs
```
2023-10-14T15:11:09.328850Z WARN bevy_asset::asset_server: no `AssetLoader` found
```
when the user forgets to add an extension to a file. This is very confusing
behaviour: it sounds like no asset loaders exist at all.
## Solution
Add an extra message on the end when there are no file extensions.
# Objective
#10105 changed the ssao input color from the material base color to
white. i can't actually see a difference in the example but there should
be one in some cases.
## Solution
change it back.
# Objective
From my understanding, although resources are not meant to be created
and removed at every frame, they are still meant to be created
dynamically during the lifetime of the App.
But because the extract_resource API does not allow optional resources
from the main world, it's impossible to use resources in the render
phase that were not created before the render sub-app itself.
## Solution
Because the ECS engine already allows for system parameters to be
`Option<Res>`, it just had to be added.
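A sketch of the pattern this unlocks (the resource, key binding and field names are placeholders):
```rust
use bevy::prelude::*;
use bevy::render::extract_resource::{ExtractResource, ExtractResourcePlugin};

// A render-relevant resource that only appears at some point during gameplay.
#[derive(Resource, ExtractResource, Clone)]
struct ScanlineSettings {
    intensity: f32,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Safe to add even though `ScanlineSettings` doesn't exist yet:
        // extraction is simply skipped until it is inserted in the main world.
        .add_plugins(ExtractResourcePlugin::<ScanlineSettings>::default())
        .add_systems(Update, enable_scanlines)
        .run();
}

fn enable_scanlines(mut commands: Commands, input: Res<Input<KeyCode>>) {
    if input.just_pressed(KeyCode::S) {
        commands.insert_resource(ScanlineSettings { intensity: 0.5 });
    }
}
```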
---
## Changelog
- Changed
  - `extract_resource` now takes an optional main world resource
- Fixed
  - `ExtractResourcePlugin` doesn't cause panics anymore if the resource is not already inserted
This adds support for **Multiple Asset Sources**. You can now register a
named `AssetSource`, which you can load assets from like you normally
would:
```rust
let shader: Handle<Shader> = asset_server.load("custom_source://path/to/shader.wgsl");
```
Notice that `AssetPath` now supports `some_source://` syntax. This can
now be accessed through the `asset_path.source()` accessor.
Asset source names _are not required_. If one is not specified, the
default asset source will be used:
```rust
let shader: Handle<Shader> = asset_server.load("path/to/shader.wgsl");
```
The behavior of the default asset source has not changed. Ex: the
`assets` folder is still the default.
As referenced in #9714
## Why?
**Multiple Asset Sources** enables a number of often-asked-for
scenarios:
* **Loading some assets from other locations on disk**: you could create
a `config` asset source that reads from the OS-default config folder
(not implemented in this PR)
* **Loading some assets from a remote server**: you could register a new
`remote` asset source that reads some assets from a remote http server
(not implemented in this PR)
* **Improved "Binary Embedded" Assets**: we can use this system for
"embedded-in-binary assets", which allows us to replace the old
`load_internal_asset!` approach, which couldn't support asset
processing, didn't support hot-reloading _well_, and didn't make
embedded assets accessible to the `AssetServer` (implemented in this pr)
## Adding New Asset Sources
An `AssetSource` is "just" a collection of `AssetReader`, `AssetWriter`,
and `AssetWatcher` entries. You can configure new asset sources like
this:
```rust
app.register_asset_source(
"other",
AssetSource::build()
.with_reader(|| Box::new(FileAssetReader::new("other")))
)
```
Note that `AssetSource` construction _must_ be repeatable, which is why
a closure is accepted.
`AssetSourceBuilder` supports `with_reader`, `with_writer`,
`with_watcher`, `with_processed_reader`, `with_processed_writer`, and
`with_processed_watcher`.
Note that the "asset source" system replaces the old "asset providers"
system.
## Processing Multiple Sources
The `AssetProcessor` now supports multiple asset sources! Processed
assets can refer to assets in other sources and everything "just works".
Each `AssetSource` defines an unprocessed and processed `AssetReader` /
`AssetWriter`.
Currently this is all or nothing for a given `AssetSource`. A given
source is either processed or it is not. Later we might want to add
support for "lazy asset processing", where an `AssetSource` (such as a
remote server) can be configured to only process assets that are
directly referenced by local assets (in order to save local disk space
and avoid doing extra work).
## A new `AssetSource`: `embedded`
One of the big features motivating **Multiple Asset Sources** was
improving our "embedded-in-binary" asset loading. To prove out the
**Multiple Asset Sources** implementation, I chose to build a new
`embedded` `AssetSource`, which replaces the old `load_internal_asset!`
system.
The old `load_internal_asset!` approach had a number of issues:
* The `AssetServer` was not aware of (or capable of loading) internal
assets.
* Because internal assets weren't visible to the `AssetServer`, they
could not be processed (or used by assets that are processed). This
would prevent things "preprocessing shaders that depend on built in Bevy
shaders", which is something we desperately need to start doing.
* Each "internal asset" needed a UUID to be defined in-code to reference
it. This was very manual and toilsome.
The new `embedded` `AssetSource` enables the following pattern:
```rust
// Called in `crates/bevy_pbr/src/render/mesh.rs`
embedded_asset!(app, "mesh.wgsl");
// later in the app
let shader: Handle<Shader> = asset_server.load("embedded://bevy_pbr/render/mesh.wgsl");
```
Notice that this always treats the crate name as the "root path", and it
trims out the `src` path for brevity. This is generally predictable, but
if you need to debug you can use the new `embedded_path!` macro to get a
`PathBuf` that matches the one used by `embedded_asset`.
You can also reference embedded assets in arbitrary assets, such as WGSL
shaders:
```rust
#import "embedded://bevy_pbr/render/mesh.wgsl"
```
This also makes `embedded` assets go through the "normal" asset
lifecycle. They are only loaded when they are actually used!
We are also discussing implicitly converting asset paths to/from shader
modules, so in the future (not in this PR) you might be able to load it
like this:
```rust
#import bevy_pbr::render::mesh::Vertex
```
Compare that to the old system!
```rust
pub const MESH_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(3252377289100772450);
load_internal_asset!(app, MESH_SHADER_HANDLE, "mesh.wgsl", Shader::from_wgsl);
// The mesh asset is _only_ accessible via MESH_SHADER_HANDLE and _cannot_ be loaded via the AssetServer.
```
## Hot Reloading `embedded`
You can enable `embedded` hot reloading by enabling the
`embedded_watcher` cargo feature:
```
cargo run --features=embedded_watcher
```
## Improved Hot Reloading Workflow
First: the `filesystem_watcher` cargo feature has been renamed to
`file_watcher` for brevity (and to match the `FileAssetReader` naming
convention).
More importantly, hot asset reloading is no longer configured in-code by
default. If you enable any asset watcher feature (such as `file_watcher`
or `rust_source_watcher`), asset watching will be automatically enabled.
This removes the need to _also_ enable hot reloading in your app code.
That means you can replace this:
```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::default().watch_for_changes()))
```
with this:
```rust
app.add_plugins(DefaultPlugins)
```
If you want to hot reload assets in your app during development, just
run your app like this:
```
cargo run --features=file_watcher
```
This means you can use the same code for development and deployment! To
deploy an app, just don't include the watcher feature
```
cargo build --release
```
My intent is to move to this approach for pretty much all dev workflows.
In a future PR I would like to replace `AssetMode::ProcessedDev` with a
`runtime-processor` cargo feature. We could then group all common "dev"
cargo features under a single `dev` feature:
```sh
# this would enable file_watcher, embedded_watcher, runtime-processor, and more
cargo run --features=dev
```
## AssetMode
`AssetPlugin::Unprocessed`, `AssetPlugin::Processed`, and
`AssetPlugin::ProcessedDev` have been replaced with an `AssetMode` field
on `AssetPlugin`.
```rust
// before
app.add_plugins(DefaultPlugins.set(AssetPlugin::Processed { /* fields here */ }))
// after
app.add_plugins(DefaultPlugins.set(AssetPlugin { mode: AssetMode::Processed, ..default() }))
```
This aligns `AssetPlugin` with our other struct-like plugins. The old
"source" and "destination" `AssetProvider` fields in the enum variants
have been replaced by the "asset source" system. You no longer need to
configure the AssetPlugin to "point" to custom asset providers.
## AssetServerMode
To improve the implementation of **Multiple Asset Sources**,
`AssetServer` was made aware of whether or not it is using "processed"
or "unprocessed" assets. You can check that like this:
```rust
if asset_server.mode() == AssetServerMode::Processed {
/* do something */
}
```
Note that this refactor should also prepare the way for building "one to
many processed output files", as it makes the server aware of whether it
is loading from processed or unprocessed sources. Meaning we can store
and read processed and unprocessed assets differently!
## AssetPath can now refer to folders
The "file only" restriction has been removed from `AssetPath`. The
`AssetServer::load_folder` API now accepts an `AssetPath` instead of a
`Path`, meaning you can load folders from other asset sources!
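For example (a sketch; `custom_source` is a placeholder source name):
```rust
// Load a whole folder from a non-default asset source.
let models: Handle<LoadedFolder> = asset_server.load_folder("custom_source://models");
```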
## Improved AssetPath Parsing
AssetPath parsing was reworked to support sources, improve error
messages, and to enable parsing with a single pass over the string.
`AssetPath::new` was replaced by `AssetPath::parse` and
`AssetPath::try_parse`.
## AssetWatcher broken out from AssetReader
`AssetReader` is no longer responsible for constructing `AssetWatcher`.
This has been moved to `AssetSourceBuilder`.
## Duplicate Event Debouncing
Asset V2 already debounced duplicate filesystem events, but only at the
level of _input_ events. Multiple input event types can produce the same
_output_ `AssetSourceEvent`.
expensive file io on events, it made sense to debounce output events
too, so I added that! This will also benefit the AssetProcessor by
preventing integrity checks for duplicate events (and helps keep the
noise down in trace logs).
## Next Steps
* **Port Built-in Shaders**: Currently the primary (and essentially
only) user of `load_internal_asset` in Bevy's source code is "built-in
shaders". I chose not to do that in this PR for a few reasons:
1. We need to add the ability to pass shader defs in to shaders via meta
files. Some shaders (such as MESH_VIEW_TYPES) need to pass shader def
values in that are defined in code.
2. We need to revisit the current shader module naming system. I think
we _probably_ want to imply modules from source structure (at least by
default). Ideally in a way that can losslessly convert asset paths
to/from shader modules (to enable the asset system to resolve modules
using the asset server).
3. I want to keep this change set minimal / get this merged first.
* **Deprecate `load_internal_asset`**: we can't do that until we do (1)
and (2)
* **Relative Asset Paths**: This PR significantly increases the need for
relative asset paths (which was already pretty high). Currently when
loading dependencies, it is assumed to be an absolute path, which means
if in an `AssetLoader` you call `context.load("some/path/image.png")` it
will assume that is the "default" asset source, _even if the current
asset is in a different asset source_. This will cause breakage for
AssetLoaders that are not designed to add the current source to whatever
paths are being used. AssetLoaders should generally not need to be aware
of the name of their current asset source, or need to think about the
"current asset source" generally. We should build apis that support
relative asset paths and then encourage using relative paths as much as
possible (both via api design and docs). Relative paths are also
important because they will allow developers to move folders around
(even across providers) without reprocessing, provided there is no path
breakage.
# Objective
- According to the GLTF spec, it should not be possible to have a
non-skinned mesh on a skinned node
> When the node contains skin, all mesh.primitives MUST contain JOINTS_0
and WEIGHTS_0 attributes
>
https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#reference-node
- However, the reverse (a skinned mesh on a non-skinned node) is just a
warning, see `NODE_SKINNED_MESH_WITHOUT_SKIN` in
https://github.com/KhronosGroup/glTF-Validator/blob/main/ISSUES.md#linkerror
- This causes a crash in Bevy because the bind group layout is made from
the mesh which is skinned, but filled from the entity which is not
```
thread '<unnamed>' panicked at 'wgpu error: Validation Error
Caused by:
In a RenderPass
note: encoder = `<CommandBuffer-(0, 5, Metal)>`
In a set_bind_group command
note: bind group = `<BindGroup-(27, 1, Metal)>`
Bind group 2 expects 2 dynamic offsets. However 1 dynamic offset were provided.
```
- Blender can export GLTF files with this kind of issues
## Solution
- When a skinned mesh is only used on non-skinned nodes, ignore the skinning
information from the mesh and warn the user (this is what three.js does)
- When a skinned mesh is used on both skinned and non-skinned nodes, log
an error
# Objective
- Enable capturing screenshots of all examples in CI on a GitHub runner
## Solution
- Shorten duration of a run
- Disable `desktop_app` mode - as there isn't any input in CI, examples
using this take way too long to run
- Change the default `ClusterConfig` - the runner is not able to do all
the clusters with the default settings
- Send extra `WindowResized` events - this is needed only for the
`split_screen` example, because CI doesn't trigger that event unlike all
the other platforms
---------
Co-authored-by: Rob Parrett <robparrett@gmail.com>
# Objective
Fixes #9676
Possible alternative to #9708
`Text2dBundles` are not currently drawn because the render-world-only
entities for glyphs that are created in `extract_text2d_sprite` are not
tracked by the per-view `VisibleEntities`.
## Solution
Add an `Option<Entity>` to `ExtractedSprite` that keeps track of the
original entity that caused a "glyph entity" to be created.
Use that in `queue_sprites` if it exists when checking view visibility.
## Benchmarks
Quick benchmarks. Average FPS over 1500 frames.
| bench | before fps | after fps | diff |
|-|-|-|-|
| many_sprites | 884.93 | 879.00 | 🟡 -0.7% |
| bevymark -- --benchmark --waves 100 --per-wave 1000 --mode sprite | 75.99 | 75.93 | 🟡 -0.1% |
| bevymark -- --benchmark --waves 50 --per-wave 1000 --mode mesh2d | 32.85 | 32.58 | 🟡 -0.8% |
# Objective
cleanup some pbr shader code. improve shader stage io consistency and
make pbr.wgsl (probably many people's first foray into bevy shader code)
a little more human-readable. also fix a couple of small issues with
deferred rendering.
## Solution
mesh_vertex_output:
- rename to forward_io (to align with prepass_io)
- rename `MeshVertexOutput` to `VertexOutput` (to align with prepass_io)
- move `Vertex` from mesh.wgsl into here (to align with prepass_io)
prepass_io:
- remove `FragmentInput`, use `VertexOutput` directly (to align with
forward_io)
- rename `VertexOutput::clip_position` to `position` (to align with
forward_io)
pbr.wgsl:
- restructure so we don't need `#ifdefs` on the actual entrypoint, use
VertexOutput and FragmentOutput in all cases and use #ifdefs to import
the right struct definitions.
- rearrange to make the flow clearer
- move alpha_discard up from `pbr_functions::pbr` to avoid needing to
call it on some branches and not others
- add a bunch of comments
deferred_lighting:
- move ssao into the `!unlit` block to reflect forward behaviour
correctly
- fix compile error with deferred + premultiply_alpha
## Migration Guide
in custom material shaders:
- `pbr_functions::pbr` no longer calls to
`pbr_functions::alpha_discard`. if you were using the `pbr` function in
a custom shader with alpha mask mode you now also need to call
alpha_discard manually
- rename imports of `bevy_pbr::mesh_vertex_output` to
`bevy_pbr::forward_io`
- rename instances of `MeshVertexOutput` to `VertexOutput`
in custom material prepass shaders:
- rename instances of `VertexOutput::clip_position` to
`VertexOutput::position`
# Objective
Fixes #10069
## Solution
Extracted UI nodes were previously stored in a `SparseSet` and had a
predictable iteration order. UI borders and outlines relied on this. Now
they are stored in a `HashMap` and that is no longer true.
This adds `entity.index()` to the sort key for `TransparentUi` so that
the iteration order is predictable and the "border entities" that get
spawned during extraction are guaranteed to get drawn after their
respective container nodes again.
I **think** that everything still works for overlapping ui nodes etc,
because the z value / primary sort is still controlled by the "ui
stack."
Text above is just my current understanding. A rendering expert should
check this out.
I will do some more testing when I can.