# Objective
Fixes #4133
## Solution
Add comparisons to make sure we don't dereference `Mut<>` in the two places where `Transform` is being mutated. The `GlobalTransform` implementation already works properly, so fixing `Transform` automatically fixed that as well.
## Objective
Currently, all directional and point lights have their viewing frusta recalculated every frame, even if they have not moved or been disabled/enabled.
## Solution
The relevant systems now make use of change detection to only update those lights whose viewing frusta may have changed.
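A minimal sketch of the kind of change-detection filter this relies on (the frusta component and system below are illustrative, not the actual ones touched by this PR):
```rust
use bevy::prelude::*;

// Illustrative stand-in for the real frusta component.
#[derive(Component)]
struct LightFrusta {
    planes: Vec<Vec4>,
}

// Only lights whose `GlobalTransform` or `PointLight` changed since the last run
// pass the `Or<(Changed<...>, ...)>` filter, so unchanged lights cost nothing here.
fn update_light_frusta(
    mut lights: Query<
        (&GlobalTransform, &PointLight, &mut LightFrusta),
        Or<(Changed<GlobalTransform>, Changed<PointLight>)>,
    >,
) {
    for (transform, light, mut frusta) in lights.iter_mut() {
        // Recompute this light's frustum planes from its new position and range.
        frusta.planes.clear();
        let _ = (transform.translation, light.range);
    }
}
```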
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 1 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update default runtime to node16 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/293">#293</a>)</li>
<li>Update package-lock.json file version to 2 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/302">#302</a>)</li>
</ul>
<h3>Breaking Changes</h3>
<p>With the update to Node 16, all scripts will now be run with Node 16 rather than Node 12.</p>
<h2>v2.3.1</h2>
<p>Fix for empty fails on Windows failing on upload <a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/281">#281</a></p>
<h2>v2.3.0 Upload Artifact</h2>
<ul>
<li>Optimizations for faster uploads of larger files that are already compressed</li>
<li>Significantly improved logging when there are chunked uploads</li>
<li>Clarifications in logs around the upload size and prohibited characters that aren't allowed in the artifact name or any uploaded files</li>
<li>Various other small bugfixes & optimizations</li>
</ul>
<h2>v2.2.4</h2>
<ul>
<li>Retry on HTTP 500 responses from the service</li>
</ul>
<h2>v2.2.3</h2>
<ul>
<li>Fixes for proxy related issues</li>
</ul>
<h2>v2.2.2</h2>
<ul>
<li>Improved retryability and error handling</li>
</ul>
<h2>v2.2.1</h2>
<ul>
<li>Update used actions/core package to the latest version</li>
</ul>
<h2>v2.2.0</h2>
<ul>
<li>Support for artifact retention</li>
</ul>
<h2>v2.1.4</h2>
<ul>
<li>Add Third Party License Information</li>
</ul>
<h2>v2.1.3</h2>
<ul>
<li>Use updated version of the <code>@action/artifact</code> NPM package</li>
</ul>
<h2>v2.1.2</h2>
<ul>
<li>Increase upload chunk size from 4MB to 8MB</li>
<li>Detect case insensitive file uploads</li>
</ul>
<h2>v2.1.1</h2>
<ul>
<li>Fix for certain symlinks not correctly being identified as directories before starting uploads</li>
</ul>
<h2>v2.1.0</h2>
<ul>
<li>Support for uploading artifacts with multiple paths</li>
<li>Support for using exclude paths</li>
<li>Updates to dependencies</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="6673cd052c"><code>6673cd0</code></a> Update <code>lockfileVersion</code> in <code>package-lock.json</code> (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/302">#302</a>)</li>
<li><a href="2244c82003"><code>2244c82</code></a> Update to node16 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/293">#293</a>)</li>
<li><a href="87348cee5f"><code>87348ce</code></a> Add 503 warning when uploading to the same artifact</li>
<li><a href="82c141cc51"><code>82c141c</code></a> Bump <code>@actions/artifact</code> to version 0.6.1 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/286">#286</a>)</li>
<li><a href="da838ae959"><code>da838ae</code></a> Bump <code>@actions/artifact</code> to version 0.6.0 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/280">#280</a>)</li>
<li><a href="f4ac36d205"><code>f4ac36d</code></a> Improve readme (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/278">#278</a>)</li>
<li><a href="5f375cca4b"><code>5f375cc</code></a> Document how to correctly use environment variables for path input (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/274">#274</a>)</li>
<li><a href="a009a66585"><code>a009a66</code></a> Create release-new-action-version.yml (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/277">#277</a>)</li>
<li><a href="b9bb65708e"><code>b9bb657</code></a> Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/250">#250</a>)</li>
<li><a href="0b3de3e43b"><code>0b3de3e</code></a> Fix <code>README.md</code> links and some formatting updates (<a href="https://github-redirect.dependabot.com/actions/upload-artifact/issues/273">#273</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/actions/upload-artifact/compare/v1...v3">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=1&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
Bumps [actions/labeler](https://github.com/actions/labeler) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/labeler/releases">actions/labeler's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/labeler/pull/319"> Update to node 16 runtime</a></li>
</ul>
<h2>v3.1.0</h2>
<p>Dependency upgrades, docs updates, and behind-the-scenes maintenance.</p>
<p>See the full list of changes since 3.0.2 here: <a href="https://github.com/actions/labeler/compare/v3.0.2...v3.1.0">https://github.com/actions/labeler/compare/v3.0.2...v3.1.0</a></p>
<p>Thanks to <a href="https://github.com/GisoBartels"><code>@GisoBartels</code></a> and <a href="https://github.com/Muhammed-Rahif"><code>@Muhammed-Rahif</code></a> for their contributions!</p>
<h2>v3.0.2</h2>
<p>No API changes, but lots of behind-the-scenes upgrades, mostly to internal dependencies, tooling, and surrounding infrastructure.</p>
<h2>v3.0.1</h2>
<p>No user-facing changes.</p>
<p>That said, there were more than enough behind-the-scenes changes to warrant a new release: <a href="https://github.com/actions/labeler/compare/v3.0.0...v3.0.1">https://github.com/actions/labeler/compare/v3.0.0...v3.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="9fd24f1f9d"><code>9fd24f1</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/actions/labeler/issues/334">#334</a> from actions/thboop/updatetov4</li>
<li><a href="abae52fa24"><code>abae52f</code></a> Update to new v4 release</li>
<li><a href="e1132f5ed5"><code>e1132f5</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/actions/labeler/issues/319">#319</a> from actions/thboop/node16update</li>
<li><a href="b0cc574b25"><code>b0cc574</code></a> Update build_test.yml</li>
<li><a href="c48e531900"><code>c48e531</code></a> Update default runtime to node16</li>
<li>See full diff in <a href="https://github.com/actions/labeler/compare/v3...v4">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/labeler&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
# Objective
- Improve documentation.
- Provide helper functions for common uses of `Windows` relating to getting the primary `Window`.
- Reduce repeated `Window` code.
## Solution
- Adds infallible `primary()` and `primary_mut()` functions with standard error text. This replaces the commonly used `get_primary().unwrap()` seen throughout bevy which has inconsistent or nonexistent error messages.
- Adds `scale_factor(WindowId)` to replace repeated code blocks throughout.
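A hedged sketch of the intended call sites (exact signatures may differ slightly):
```rust
use bevy::prelude::*;
use bevy::window::WindowId;

// Before: `windows.get_primary().unwrap()` with an ad-hoc (or missing) message.
// After: `windows.primary()`, which panics with a consistent, descriptive message
// if the primary window does not exist.
fn camera_setup(windows: Res<Windows>) {
    let window = windows.primary();
    let scale = windows.scale_factor(WindowId::primary());
    let _ = (window.width(), scale);
}
```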
## Considerations
- The added functions can panic if the primary window does not exist.
- It is very uncommon for the primary window to not exist, as seen by the regular use of `get_primary().unwrap()`. Most users will have a single window and will need to reference the primary window in their code multiple times.
- The panic provides a consistent error message to make this class of error easy to spot from the panic text.
- This follows the established standard of short names for infallible-but-unlikely-to-panic functions in bevy.
- Removes line noise for common usage of `Windows`.
# Objective
Fixes #3744
## Solution
The old code used the formula `normal . center + d + radius <= 0` to determine if the sphere with center `center` and radius `radius` is outside the plane with normal `normal` and distance from origin `d`. This only works if `normal` is normalized, which is not necessarily the case. In general, `normal` and `d` are both scaled by some common factor (the length of `normal`) that `radius` is not scaled by, so the fix multiplies `radius` by that same factor.
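A small self-contained sketch of the corrected test, assuming the plane is stored as a `Vec4` with `xyz` = `normal` and `w` = `d` (neither necessarily normalized):
```rust
use bevy::math::{Vec3, Vec4};

// The sphere lies entirely on the negative side of the plane when
//   normal . center + d + radius * |normal| <= 0
// Scaling `radius` by |normal| keeps the test valid when the plane equation has
// been multiplied through by an arbitrary factor.
fn sphere_outside_plane(plane: Vec4, center: Vec3, radius: f32) -> bool {
    let normal = plane.truncate();
    normal.dot(center) + plane.w + radius * normal.length() <= 0.0
}
```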
## Objective
A step towards `f64` `Transform`s (#1680). For now, I am rolling my own `Transform`. But in order to derive Reflect, I specifically need `DQuat` to be reflectable.
```rust
#[derive(Component, Reflect, Copy, Clone, PartialEq, Debug)]
#[reflect(Component, PartialEq)]
pub struct Transform {
    pub translation: DVec3,
    pub rotation: DQuat, // error: the trait `bevy::prelude::Reflect` is not implemented for `DQuat`
    pub scale: DVec3,
}
```
## Solution
I have added a `DQuat` impl for `Reflect` alongside the other glam impls. I've also added impls for `DMat3` and `DMat4` to match.
# Objective
- Reduce power usage for games when not focused.
- Reduce power usage to ~0 when a desktop application is minimized (opt-in).
- Reduce power usage when focused, only updating when a `winit` event is received or the user sends a redraw request. (opt-in)
https://user-images.githubusercontent.com/2632925/156904387-ec47d7de-7f06-4c6f-8aaf-1e952c1153a2.mp4
Note resource usage in the Task Manager in the above video.
## Solution
- Added a type `UpdateMode` that allows users to specify how the winit event loop is updated, without exposing winit types.
- Added two fields to `WinitConfig`, both with the `UpdateMode` type. One configures how the application updates when focused, and the other configures how the application behaves when it is not focused. Users can modify this resource manually to set the type of event loop control flow they want.
- For convenience, two functions were added to `WinitConfig`, that provide reasonable presets: `game()` (default) and `desktop_app()`.
- The `game()` preset, which is used by default, is unchanged from current behavior with one exception: when the app is out of focus the app updates at a minimum of 10fps, or every time a winit event is received. This has a huge positive impact on power use and responsiveness on my machine, which will otherwise continue running the app at many hundreds of fps when out of focus or minimized.
- The `desktop_app()` preset is fully reactive, only updating when user input (a winit event) is supplied or a `RequestRedraw` event is sent. When the app is out of focus, it only updates on `Window` events - i.e. any winit event that directly interacts with the window. What this means in practice is that the app uses *zero* resources when minimized or not interacted with, but still updates fluidly when the app is out of focus and the user mouses over the application.
- Added a `RequestRedraw` event so users can force an update even if there are no events. This is useful in an application when you want to, say, run an animation even when the user isn't providing input.
- Added an example `low_power` to demonstrate these changes
## Usage
Configuring the event loop:
```rs
use bevy::winit::{WinitConfig};
// ...
.insert_resource(WinitConfig::desktop_app()) // preset
// or
.insert_resource(WinitConfig::game()) // preset
// or
.insert_resource(WinitConfig{ .. }) // manual
```
Requesting a redraw:
```rs
use bevy::window::RequestRedraw;
// ...
fn request_redraw(mut event: EventWriter<RequestRedraw>) {
    event.send(RequestRedraw);
}
```
## Other details
- Because we have a single event loop for multiple windows, every time I've mentioned "focused" above, I more precisely mean, "if at least one bevy window is focused".
- Due to a platform bug in winit (https://github.com/rust-windowing/winit/issues/1619), we can't simply use `Window::request_redraw()`. As a workaround, this PR will temporarily set the window mode to `Poll` when a redraw is requested. This is then reset to the user's `WinitConfig` setting on the next frame.
# Objective
Currently, errors in the render graph runner are exposed via a `Result::unwrap()` panic message, which dumps the debug representation of the error.
## Solution
This PR updates `render_system` to log the chain of errors, followed by an explicit panic:
```
ERROR bevy_render::renderer: Error running render graph:
ERROR bevy_render::renderer: > encountered an error when running a sub-graph
ERROR bevy_render::renderer: > tried to pass inputs to sub-graph "outline_graph", which has no input slots
thread 'main' panicked at 'Error running render graph: encountered an error when running a sub-graph', /[redacted]/bevy/crates/bevy_render/src/renderer/mod.rs:44:9
```
Some errors' `Display` impls (via `thiserror`) have also been updated to provide more detail about the cause of the error.
# Objective
- `Assets<T>::iter_mut` sends `Modified` event for all assets first, then returns the iterator
- This means that events could be sent for assets that would not have been mutated if iteration was stopped before
## Solution
- Send `Modified` event when assets are iterated over.
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
- Make the many_cubes example more interesting (and look more like many_sprites)
## Solution
- Actually display many cubes
- Move the camera around
Not sure if this is the new format, but from a quick look I couldn't find an easy way to update it in the book
# Objective
* For sample project to compile after first pages of book
got:
```
= note: clang: error: invalid linker name in argument '-fuse-ld=/usr/local/bin/zld'
error: linking with `cc` failed: exit status: 1
```
## Solution
Fix to use the correct zld path
This makes it possible for materials to configure front or
back face culling, or disable culling.
Initially I looked at specializing the Mesh, which currently
controls this state, but conceptually it seems more appropriate
to control this at the material level, not the mesh level.
_Just for reference this also seems to be consistent with Unity
where materials/shaders can configure the culling mode between
front/back/off - as opposed to configuring any culling state
when importing a mesh._
After some archaeology, trying to understand how this might
relate to the existing 'double_sided' option, it was determined
that double_sided is a higher-level lighting option, originally
from Filament, that causes the normals for back faces to be
flipped.
For the sake of avoiding complexity, while keeping control, this
currently keeps the options orthogonal and adds some clarifying
documentation for `double_sided`. This won't affect any existing
apps since there hasn't been a way to disable backface culling
up until now, so the option was essentially redundant.
double_sided support could potentially be updated to imply
disabling of backface culling.
For reference https://github.com/bevyengine/bevy/pull/3734/commits also looks at exposing cull mode control. I think the main difference here is that this patch handles RenderPipelineDescriptor specialization directly within the StandardMaterial implementation instead of communicating info back to the Mesh via the `queue_material_meshes` system.
With the way material.rs builds up the final RenderPipelineDescriptor first by calling specialize for the MeshPipeline followed by specialize for the material then it seems like we have a natural place to override anything in the descriptor that's first configured for the mesh state.
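To make that override concrete, here is a hedged sketch of the kind of adjustment the material's specialization can apply to the descriptor the mesh pass has already filled in (the function name and the way `cull_mode` is obtained are illustrative):
```rust
use bevy::render::render_resource::{Face, RenderPipelineDescriptor};

// The material gets the last word on the primitive state that the mesh's
// specialization already populated. `cull_mode` would come from the material
// itself: None disables culling, Some(Face::Front)/Some(Face::Back) otherwise.
fn apply_material_cull_mode(
    descriptor: &mut RenderPipelineDescriptor,
    cull_mode: Option<Face>,
) {
    descriptor.primitive.cull_mode = cull_mode;
}
```
Keeping this in the material's specialization avoids having to thread cull-mode state back through `queue_material_meshes`.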
# Objective
Add Visibility for lights
## Solution
- add Visibility to PointLightBundle and DirectionLightBundle
- filter lights used by Visibility.is_visible
note: this includes changes from #3916 due to overlap; it will be cleaner after that is merged
# Objective
When developing plugins, I very often find myself needing logging information printed out. The exact syntax is a bit cryptic, and it takes some time to find the documentation.
Also fixes a minor typo in the `It has the same syntax as` part.
## Solution
Adding a direct example in the module level information for both:
1. Enabling a specific level (`trace` in the example) for a module and all its subsystems at App init
2. Doing the same from console, when launching the application
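As a rough illustration of what the added docs cover (a sketch assuming the `LogSettings` resource and the `RUST_LOG`-style filter syntax; the module name is a placeholder):
```rust
use bevy::log::{Level, LogSettings};
use bevy::prelude::*;

fn main() {
    App::new()
        // Enable `trace` for one module and its children while keeping
        // everything else at `info`. The resource must be inserted before
        // DefaultPlugins so the log plugin picks it up.
        .insert_resource(LogSettings {
            level: Level::INFO,
            filter: "my_crate::my_module=trace".to_string(),
        })
        .add_plugins(DefaultPlugins)
        .run();
}
```
The same filter syntax can be supplied from the console when launching the application, e.g. `RUST_LOG="info,my_crate::my_module=trace"`.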
Adds a `default()` shorthand for `Default::default()` ... because life is too short to constantly type `Default::default()`.
```rust
use bevy::prelude::*;
#[derive(Default)]
struct Foo {
    bar: usize,
    baz: usize,
}
// Normally you would do this:
let foo = Foo {
    bar: 10,
    ..Default::default()
};
// But now you can do this:
let foo = Foo {
    bar: 10,
    ..default()
};
```
The examples have been adapted to use `..default()`. I've left internal crates as-is for now because they don't pull in the bevy prelude, and the ergonomics of each case should be considered individually.
# Objective
fix #3915
## Solution
the issues are caused by
- lights are assigned to clusters before being filtered down to MAX_POINT_LIGHTS, leading to cluster counts potentially being too high
- after fixing the above, packing the count into 8 bits still causes overflow with exactly 256 lights affecting a cluster
to fix:
`assign_lights_to_clusters`:
- limit extracted lights to MAX_POINT_LIGHTS, selecting based on shadow-caster & intensity (if required)
- warn if the MAX_POINT_LIGHTS count is exceeded
`prepare_lights`:
- limit the lights assigned to a cluster to CLUSTER_COUNT_MASK (which is 1 less than MAX_POINT_LIGHTS) to avoid overflowing into the offset bits
notes:
- a better solution to the overflow may be to use more than 8 bits for cluster_count (the comment states only 14 of the remaining 24 bits are used for the offset). This would touch more of the code base, but I'm happy to try it if it has some benefit.
- intensity is only one way to select lights. It may be worth allowing user configuration of the light filtering, but I can't see a clean way to do that.
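For illustration, a small self-contained sketch of the packing constraint behind the `prepare_lights` change (constant values mirror the ones described above; this is not the actual bevy code):
```rust
// The cluster's light count is packed into the low 8 bits of a u32, with the
// offset stored in the bits above it. Clamping the count to CLUSTER_COUNT_MASK
// before packing prevents a count of exactly 256 from spilling into the offset bits.
const CLUSTER_COUNT_SIZE: u32 = 8;
const CLUSTER_COUNT_MASK: u32 = (1 << CLUSTER_COUNT_SIZE) - 1;

fn pack_offset_and_count(offset: u32, count: u32) -> u32 {
    (offset << CLUSTER_COUNT_SIZE) | count.min(CLUSTER_COUNT_MASK)
}
```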
# Objective
- Add ways to control how audio is played
## Solution
- playing a sound will return a (weak) handle to an asset that can be used to control playback
- if the asset is dropped, it will detach the sink (same behaviour as now)
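A hedged sketch of how the returned handle might be used (resource, system, and asset names are illustrative; the weak handle is upgraded so the sink isn't dropped and detached):
```rust
use bevy::audio::AudioSink;
use bevy::prelude::*;

// Illustrative resource holding the handle returned from `Audio::play`.
struct MusicController(Handle<AudioSink>);

fn start_music(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    audio: Res<Audio>,
    sinks: Res<Assets<AudioSink>>,
) {
    // `play` returns a weak handle to an `AudioSink` asset.
    let weak = audio.play(asset_server.load("music.ogg"));
    // Upgrade to a strong handle so the sink isn't dropped (which would detach it).
    commands.insert_resource(MusicController(sinks.get_handle(weak)));
}

fn pause_music(
    keyboard: Res<Input<KeyCode>>,
    sinks: Res<Assets<AudioSink>>,
    music: Res<MusicController>,
) {
    if keyboard.just_pressed(KeyCode::Space) {
        if let Some(sink) = sinks.get(&music.0) {
            sink.pause();
        }
    }
}
```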
# Objective
- Help debug panics
## Solution
- Insert a custom panic hook when trace is enabled that will log spans
example when running a command on a despawned entity
before:
```
thread 'main' panicked at 'Could not add a component (of type `panic::Marker`) to entity 1v0 because it doesn't exist in this World.
If this command was added to a newly spawned entity, ensure that you have not despawned that entity within the same stage.
This may have occurred due to system order ambiguity, or if the spawning system has multiple command buffers', /bevy/crates/bevy_ecs/src/system/commands/mod.rs:664:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
after:
```
0: bevy_ecs::schedule::stage::system_commands
with name="panic::my_bad_system"
at crates/bevy_ecs/src/schedule/stage.rs:871
1: bevy_ecs::schedule::stage
with name=Update
at crates/bevy_ecs/src/schedule/mod.rs:340
2: bevy_app::app::frame
at crates/bevy_app/src/app.rs:111
3: bevy_app::app::bevy_app
at crates/bevy_app/src/app.rs:126
thread 'main' panicked at 'Could not add a component (of type `panic::Marker`) to entity 1v0 because it doesn't exist in this World.
If this command was added to a newly spawned entity, ensure that you have not despawned that entity within the same stage.
This may have occurred due to system order ambiguity, or if the spawning system has multiple command buffers', /bevy/crates/bevy_ecs/src/system/commands/mod.rs:664:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
# Objective
- Optimize assign_lights_to_clusters
## Solution
- Avoid inserting entities into hash sets in inner loops when it is known they will be inserted in at least one iteration of the loop.
- Use a Vec instead of a hash set where the set is not needed
- Avoid explicit calculation of the cluster_index from x,y,z coordinates, instead using row and column offsets and just adding z in the inner loop
- These changes cut the time spent in the system roughly in half
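A self-contained sketch of the index-offset trick from the third bullet (the x-major, then y, then z cluster layout here is an assumption for illustration):
```rust
// Precompute the row and column base offsets outside the inner loop so the
// innermost loop only adds `z`, instead of recomputing
// (x * y_count + y) * z_count + z for every cluster.
fn touched_cluster_indices(dims: (usize, usize, usize)) -> Vec<usize> {
    let (x_count, y_count, z_count) = dims;
    let mut touched = Vec::new();
    for x in 0..x_count {
        let row_offset = x * y_count * z_count;
        for y in 0..y_count {
            let col_offset = row_offset + y * z_count;
            for z in 0..z_count {
                touched.push(col_offset + z);
            }
        }
    }
    touched
}
```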
# Objective
- Currently there is no way of making an indirect draw call from a tracked render pass.
- This is a very useful feature for GPU based rendering.
## Solution
- Expose the `draw_indirect` and `draw_indexed_indirect` methods from the wgpu `RenderPass` in the `TrackedRenderPass`.
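A hedged sketch of the new calls from a tracked pass (assuming the indirect draw arguments have already been written into `indirect_buffer`):
```rust
use bevy::render::render_phase::TrackedRenderPass;
use bevy::render::render_resource::Buffer;

// A non-indexed indirect draw whose arguments live at offset 0 of
// `indirect_buffer`; `draw_indexed_indirect` works the same way for indexed meshes.
fn draw_gpu_driven(pass: &mut TrackedRenderPass<'_>, indirect_buffer: &Buffer) {
    pass.draw_indirect(indirect_buffer, 0);
}
```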
## Alternative
- #3595: Expose the underlying `RenderPass` directly
# Objective
- In the large majority of cases, users were calling `.unwrap()` immediately after `.get_resource`.
- Attempting to add more helpful error messages here resulted in endless manual boilerplate (see #3899 and the linked PRs).
## Solution
- Add an infallible variant named `.resource` and so on.
- Use these infallible variants over `.get_resource().unwrap()` across the code base.
## Notes
I did not provide equivalent methods on `WorldCell`, in favor of removing it entirely in #3939.
## Migration Guide
Infallible variants of `.get_resource` have been added that implicitly panic, rather than needing to be unwrapped.
Replace `world.get_resource::<Foo>().unwrap()` with `world.resource::<Foo>()`.
## Impact
- `.unwrap` search results before: 1084
- `.unwrap` search results after: 942
- internal `unwrap_or_else` calls added: 4
- trivial unwrap calls removed from tests and code: 146
- uses of the new `try_get_resource` API: 11
- percentage of the time the unwrapping API was used internally: 93%
@BlackPhlox kindly pointed out and resolved a couple of inconsistencies in the bevy logo:
* The arc of the first bird's back had three vertices right next to each other, which created a noticeable sharp edge. This replaces them with a single vertex.
* The bottom part of the tail had a sharp edge, which was inconsistent with the top part of the tail. This was rounded out to mirror the top part.
I also took the chance to clean up some of the variants and (hopefully) improve the "bevy_logo_light_dark_and_dimmed" variant to improve how it renders on dark themes.
# Objective
Continuation of #2663 due to git problems - better documentation for Query::par_for_each and par_for_each_mut
## Solution
Going into more detail about the function parameters
All other examples don't have the "2d" prefix in their names (even though they are in the 2d folder), and reading the README makes the user think that the example is named "rotation", not "2d_rotation", hence this rename PR.
# Objective
- Remove discrepancy between example name in documentation and in cargo
## Solution
- Rename example in cargo file
# Objective
The `#[reflect_trait]` macro did not maintain the visibility of its trait. It also did not make its accessor methods public, which made them inaccessible outside the current module.
## Solution
Made the `Reflect***` struct match the visibility of its trait and made both the `get` and `get_mut` methods always public.
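A minimal sketch of the pattern this affects (in the style of the reflection examples; names are illustrative):
```rust
use bevy::reflect::{reflect_trait, Reflect};

// Because `DoThing` is `pub`, the generated `ReflectDoThing` type data struct is
// now `pub` as well, and its `get`/`get_mut` accessors are usable from other modules.
#[reflect_trait]
pub trait DoThing {
    fn do_thing(&self) -> String;
}

// Registering the type data via `#[reflect(DoThing)]` lets callers go from
// `&dyn Reflect` back to `&dyn DoThing` through `ReflectDoThing::get`.
#[derive(Reflect)]
#[reflect(DoThing)]
struct MyType {
    value: String,
}

impl DoThing for MyType {
    fn do_thing(&self) -> String {
        self.value.clone()
    }
}
```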
# Objective
- Fix the ugliness of the `config` api.
- Supersedes #2440, #2463, #2491
## Solution
- Since #2398, capturing closure systems have worked.
- Use those instead where we needed config before
- Remove the rest of the config api.
- Related: #2777
# Objective
rg3d was renamed to Fyrox a little while ago and we should reflect that. It's not particularly urgent since the old link redirects to Fyrox.
## Solution
Rename rg3d to Fyrox in the README
# Objective
`Vec3A` does not implement `Reflect`. Implementing it is generally useful for `Reflect` derives with `Vec3A` fields, and may speed up some animation blending use cases.
## Solution
Extend the existing macro uses to include `Vec3A`.
# Objective
- Fixes #4010, as well as any similar issues in this class.
- Winit functions used outside of the main thread can cause the application to unexpectedly hang.
## Solution
- Make the `WinitWindows` resource `!Send`.
- This ensures that any systems that use `WinitWindows` must either be exclusive (run on the main thread), or the resource is explicitly marked with the `NonSend` parameter in user systems.
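A hedged sketch of what user systems now look like (the system body is illustrative): because `WinitWindows` is `!Send`, it must be requested via `NonSend`, which forces the system onto the main thread.
```rust
use bevy::prelude::*;
use bevy::window::WindowId;
use bevy::winit::WinitWindows;

// `NonSend<WinitWindows>` pins this system to the main thread, which is where
// winit calls are safe to make.
fn set_window_title(windows: NonSend<WinitWindows>) {
    if let Some(window) = windows.get_window(WindowId::primary()) {
        window.set_title("main thread only");
    }
}
```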
# Objective
Will fix #3377 and #3254
## Solution
Use an enum to represent either a `WindowId` or `Handle<Image>` in place of `Camera::window`.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Closes #786
- Closes #2252
- Closes #2588
This PR implements a derive macro that allows users to define their queries as structs with named fields.
## Example
```rust
#[derive(WorldQuery)]
#[world_query(derive(Debug))]
struct NumQuery<'w, T: Component, P: Component> {
    entity: Entity,
    u: UNumQuery<'w>,
    generic: GenericQuery<'w, T, P>,
}

#[derive(WorldQuery)]
#[world_query(derive(Debug))]
struct UNumQuery<'w> {
    u_16: &'w u16,
    u_32_opt: Option<&'w u32>,
}

#[derive(WorldQuery)]
#[world_query(derive(Debug))]
struct GenericQuery<'w, T: Component, P: Component> {
    generic: (&'w T, &'w P),
}

#[derive(WorldQuery)]
#[world_query(filter)]
struct NumQueryFilter<T: Component, P: Component> {
    _u_16: With<u16>,
    _u_32: With<u32>,
    _or: Or<(With<i16>, Changed<u16>, Added<u32>)>,
    _generic_tuple: (With<T>, With<P>),
    _without: Without<Option<u16>>,
    _tp: PhantomData<(T, P)>,
}

fn print_nums_readonly(query: Query<NumQuery<u64, i64>, NumQueryFilter<u64, i64>>) {
    for num in query.iter() {
        println!("{:#?}", num);
    }
}

#[derive(WorldQuery)]
#[world_query(mutable, derive(Debug))]
struct MutNumQuery<'w, T: Component, P: Component> {
    i_16: &'w mut i16,
    i_32_opt: Option<&'w mut i32>,
}

fn print_nums(mut query: Query<MutNumQuery, NumQueryFilter<u64, i64>>) {
    for num in query.iter_mut() {
        println!("{:#?}", num);
    }
}
```
## TODOs:
- [x] Add support for `&T` and `&mut T`
- [x] Test
- [x] Add support for optional types
- [x] Test
- [x] Add support for `Entity`
- [x] Test
- [x] Add support for nested `WorldQuery`
- [x] Test
- [x] Add support for tuples
- [x] Test
- [x] Add support for generics
- [x] Test
- [x] Add support for query filters
- [x] Test
- [x] Add support for `PhantomData`
- [x] Test
- [x] Refactor `read_world_query_field_type_info`
- [x] Properly document `readonly` attribute for nested queries and the static assertions that guarantee safety
- [x] Test that we never implement `ReadOnlyFetch` for types that need mutable access
- [x] Test that we insert static assertions for nested `WorldQuery` that a user marked as readonly
This PR makes a number of changes to how meshes and vertex attributes are handled, with the goal of enabling easy and flexible custom vertex attributes:
* Reworks the `Mesh` type to use the newly added `VertexAttribute` internally
* `VertexAttribute` defines the name, a unique `VertexAttributeId`, and a `VertexFormat`
* `VertexAttributeId` is used to produce consistent sort orders for vertex buffer generation, replacing the more expensive and often surprising "name based sorting"
* Meshes can be used to generate a `MeshVertexBufferLayout`, which defines the layout of the gpu buffer produced by the mesh. `MeshVertexBufferLayouts` can then be used to generate actual `VertexBufferLayouts` according to the requirements of a specific pipeline. This decoupling of "mesh layout" vs "pipeline vertex buffer layout" is what enables custom attributes. We don't need to standardize _mesh layouts_ or contort meshes to meet the needs of a specific pipeline. As long as the mesh has what the pipeline needs, it will work transparently.
* Mesh-based pipelines now specialize on `&MeshVertexBufferLayout` via the new `SpecializedMeshPipeline` trait (which behaves like `SpecializedPipeline`, but adds `&MeshVertexBufferLayout`). The integrity of the pipeline cache is maintained because the `MeshVertexBufferLayout` is treated as part of the key (which is fully abstracted from implementers of the trait ... no need to add any additional info to the specialization key).
* Hashing `MeshVertexBufferLayout` is too expensive to do for every entity, every frame. To make this scalable, I added a generalized "pre-hashing" solution to `bevy_utils`: `Hashed<T>` keys and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). Why didn't I just do the quick and dirty in-place "pre-compute hash and use that u64 as a key in a hashmap" that we've done in the past? Because it's wrong! Hashes by themselves aren't enough because two different values can produce the same hash. Re-hashing a hash is even worse! I decided to build a generalized solution because this pattern has come up in the past and we've chosen to do the wrong thing. Now we can do the right thing! This did unfortunately require pulling in `hashbrown` and using that in `bevy_utils`, because avoiding re-hashes requires the `raw_entry_mut` api, which isn't stabilized yet (and may never be ... `entry_ref` is favored now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. Note that adding `hashbrown` doesn't increase our dependency count because it was already in our tree. I will probably break these changes out into their own PR.
* Specializing on `MeshVertexBufferLayout` has one non-obvious behavior: it can produce identical pipelines for two different MeshVertexBufferLayouts. To optimize the number of active pipelines / reduce re-binds while drawing, I de-duplicate pipelines post-specialization using the final `VertexBufferLayout` as the key. For example, consider a pipeline that needs the layout `(position, normal)` and is specialized using two meshes: `(position, normal, uv)` and `(position, normal, other_vec2)`. If both of these meshes result in `(position, normal)` specializations, we can use the same pipeline! Now we do. Cool!
To briefly illustrate, this is what the relevant section of `MeshPipeline`'s specialization code looks like now:
```rust
impl SpecializedMeshPipeline for MeshPipeline {
    type Key = MeshPipelineKey;

    fn specialize(
        &self,
        key: Self::Key,
        layout: &MeshVertexBufferLayout,
    ) -> RenderPipelineDescriptor {
        let mut vertex_attributes = vec![
            Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
            Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
            Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
        ];
        let mut shader_defs = Vec::new();
        if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
            shader_defs.push(String::from("VERTEX_TANGENTS"));
            vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
        }
        let vertex_buffer_layout = layout
            .get_layout(&vertex_attributes)
            .expect("Mesh is missing a vertex attribute");
```
Notice that this is _much_ simpler than it was before. And now any mesh with any layout can be used with this pipeline, provided it has vertex positions, normals, and uvs. We even got to remove `HAS_TANGENTS` from MeshPipelineKey and `has_tangents` from `GpuMesh`, because that information is redundant with `MeshVertexBufferLayout`.
This is still a draft because I still need to:
* Add more docs
* Experiment with adding error handling to mesh pipeline specialization (which would print errors at runtime when a mesh is missing a vertex attribute required by a pipeline). If it doesn't tank perf, we'll keep it.
* Consider breaking out the PreHash / hashbrown changes into a separate PR.
* Add an example illustrating this change
* Verify that the "mesh-specialized pipeline de-duplication code" works properly
Please don't yell at me for not doing these things yet :) Just trying to get this in peoples' hands asap.
Alternative to #3120. Fixes #3030.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
`all_tuples` panics when the start count is set to anything other than 0 or 1. Fix this bug.
## Solution
Originally part of #2381, this PR fixes the slice indexing used by the proc macro.
# Objective
- Fixes#4005
## Solution
- Include the `near` and `far` clipping values from the perspective projection in the `Camera` struct; before that, they were both being defaulted to 0.
# Context
I wanted to add a `texture` to my `ColorMaterial` without explicitly adding a `color`. To do this I used `..Default::default()` which in turn gave me unexpected results. I was expecting that my texture would render without any color modifications, but to my surprise it got rendered in a purple tint (`Color::rgb(1.0, 0.0, 1.0)`). To fix this I had to explicitly define the `color` using `color: Color::WHITE`.
## What I wanted to use
```rust
commands
    .spawn_bundle(MaterialMesh2dBundle {
        mesh: mesh_handle.clone().into(),
        transform: Transform::default().with_scale(Vec3::splat(8.)),
        material: materials.add(ColorMaterial {
            texture: Some(texture_handle.clone()),
            ..Default::default() // here
        }),
        ..Default::default()
    })
```
![image](https://user-images.githubusercontent.com/75334794/154765141-4a8161ce-4ec8-4687-b7d5-18ddf1b58660.png)
## What I had to use instead
```rust
commands
    .spawn_bundle(MaterialMesh2dBundle {
        mesh: mesh_handle.clone().into(),
        transform: Transform::default().with_scale(Vec3::splat(8.)),
        material: materials.add(ColorMaterial {
            texture: Some(texture_handle.clone()),
            color: Color::WHITE, // here
        }),
        ..Default::default()
    })
```
![image](https://user-images.githubusercontent.com/75334794/154765225-f1508b41-9d5b-4f0c-af7b-e89c1a82d85b.png)
Adds "hot reloading" of internal assets, which is normally not possible because they are loaded using `include_str` / direct Asset collection access.
This is accomplished via the following:
* Add a new `debug_asset_server` feature flag
* When that feature flag is enabled, create a second App with a second AssetServer that points to a configured location (by default the `crates` folder). Plugins that want to add hot reloading support for their assets can call the new `app.add_debug_asset::<T>()` and `app.init_debug_asset_loader::<T>()` functions.
* Load "internal" assets using the new `load_internal_asset` macro. By default this is identical to the current "include_str + register in asset collection" approach. But if the `debug_asset_server` feature flag is enabled, it will also load the asset dynamically in the debug asset server using the file path. It will then set up a correlation between the "debug asset" and the "actual asset" by listening for asset change events.
This is an alternative to #3673. The goal was to keep the boilerplate and features flags to a minimum for bevy plugin authors, and allow them to home their shaders near relevant code.
This is a draft because I haven't done _any_ quality control on this yet. I'll probably rename things and remove a bunch of unwraps. I just got it working and wanted to use it to start a conversation.
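As a rough sketch of the intended usage (the handle constant, file name, and exact import paths are illustrative and may differ from the final API), a plugin might register one of its shaders like this:
```rust
use bevy::asset::{load_internal_asset, HandleUntyped};
use bevy::prelude::*;
use bevy::reflect::TypeUuid;
use bevy::render::render_resource::Shader;

// Illustrative pre-reserved handle for the plugin's internal shader.
pub const MY_SHADER_HANDLE: HandleUntyped =
    HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 0x1234_5678_9abc_def0);

pub struct MyPlugin;

impl Plugin for MyPlugin {
    fn build(&self, app: &mut App) {
        // Normally this embeds the file via include_str!; with the
        // `debug_asset_server` feature it is also loaded from disk and hot reloaded.
        load_internal_asset!(app, MY_SHADER_HANDLE, "my_shader.wgsl", Shader::from_wgsl);
    }
}
```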
Fixes #3660
This enables shaders to (optionally) define their import path inside their source. This has a number of benefits:
1. enables users to define their own custom paths directly in their assets
2. moves the import path "close" to the asset instead of centralized in the plugin definition, which seems "better" to me.
3. makes "internal hot shader reloading" way more reasonable (see #3966)
4. logically opens the door to importing "parts" of a shader by defining "import_path blocks".
```rust
#define_import_path bevy_pbr::mesh_struct
struct Mesh {
    model: mat4x4<f32>;
    inverse_transpose_model: mat4x4<f32>;
    // 'flags' is a bit field indicating various options. u32 is 32 bits so we have up to 32 options.
    flags: u32;
};
let MESH_FLAGS_SHADOW_RECEIVER_BIT: u32 = 1u;
```