Adds a new `Handle<Storage>` asset type that can be used as a render
asset, particularly for use with `AsBindGroup`.
Closes: #13658
# Objective
Allow users to create storage buffers in the main world without having
to access the `RenderDevice`. While this resource is technically
available, it's bad form to use in the main world and requires mixing
rendering details with main world code. Additionally, this makes storage
buffers easier to use with `AsBindGroup`, particularly in the following
scenarios:
- Sharing the same buffers between a compute stage and material shader.
We already have examples of this for storage textures (see game of life
example) and these changes allow a similar pattern to be used with
storage buffers.
- Preventing repeated GPU uploads (see the previous, easier-to-use `Vec`
`AsBindGroup` option).
- Allowing custom materials to be initialized using `Default`. Previously, the
lack of a `Default` implementation for the raw `wgpu::Buffer` type made
implementing an `AsBindGroup + Default` bound difficult in the presence
of buffers.
## Solution
Adds a new `Handle<Storage>` asset type that is prepared into a
`GpuStorageBuffer` render asset. This asset can either be initialized
with a `Vec<u8>` of properly aligned data or with a size hint. Users can
modify the underlying `wgpu::BufferDescriptor` to provide additional
usage flags.
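A minimal sketch of how this could look in a material, assuming the asset type
is exposed as `Storage` (per the description above) and that the derive's
`storage` attribute accepts a handle; the attribute syntax and import path are
assumptions, not the exact API:
```rust
use bevy::prelude::*;
use bevy::render::render_resource::AsBindGroup;
use bevy::render::storage::Storage; // assumed path for the new asset type

// Hypothetical material sharing a storage buffer with a compute pass. Because
// `Handle<Storage>` implements `Default`, the whole material can derive it too.
#[derive(Asset, TypePath, AsBindGroup, Default, Clone)]
struct GameOfLifeMaterial {
    #[storage(0, read_only)]
    cells: Handle<Storage>,
}
```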
## Migration Guide
The `AsBindGroup` `storage` attribute has been modified to reference the
new `Handle<Storage>` asset instead. Usages of `Vec` should be converted
into assets instead.
---------
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Fixes #14993 (maybe). Adds a system ordering constraint that was missed
in the refactor in #14833. The theory here is that the single-threaded
executor forces a topology that causes the prepare system to run before
`prepare_windows` in a way that causes issues. For whatever reason, this
appears to be unlikely when multi-threading is enabled.
# Objective
- Fixes #14974
## Solution
- Replace all* instances of `NonZero*` with `NonZero<*>`
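For illustration, the change amounts to swapping the integer-specific aliases
for the generic `std` type:
```rust
use std::num::NonZero;

fn main() {
    // Before: `NonZeroU32::new(4).unwrap()`
    // After: the generic form this PR migrates to.
    let n: NonZero<u32> = NonZero::new(4).unwrap();
    assert_eq!(n.get(), 4);
}
```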
## Testing
- CI passed locally.
---
## Notes
Within the `bevy_reflect` implementations for `std` types,
`impl_reflect_value!()` will continue to use the type aliases instead,
as it inappropriately parses the concrete type parameter as a generic
argument. If the `ZeroablePrimitive` trait was stable, or the macro
could be modified to accept a finite list of types, then we could fully
migrate.
# Objective
- Fixes #14841
## Solution
- Compute the `BufferSlice` size manually and use it for comparison in
`TrackedRenderPass`
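A rough sketch of the idea (not the exact Bevy code): derive the effective
slice length from the offset and optional size so that two bindings of the
same buffer at the same offset but with different lengths no longer compare as
equal:
```rust
use std::num::NonZeroU64;

// `buffer_size`, `offset`, and `size` follow wgpu's `set_vertex_buffer`
// conventions, where a `None` size means "to the end of the buffer".
fn effective_slice_size(buffer_size: u64, offset: u64, size: Option<NonZeroU64>) -> u64 {
    match size {
        Some(size) => size.get(),
        None => buffer_size - offset,
    }
}
```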
## Testing
- The Gizmo example does not crash with #14721 (without system ordering),
and `slice` computes the correct size there
---
## Migration Guide
- The `TrackedRenderPass::set_vertex_buffer` function has been modified to
update the vertex buffer when the same buffer with the same offset is
provided but its size has changed. Some existing code may rely on the
previous behavior, which did not update the vertex buffer in this
scenario.
---------
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
# Objective
The Android example on Adreno 642L currently crashes on startup.
Previous PRs #14176 and #13323 have addressed this specific crash
occurring on some Adreno GPUs. That fix works as it should, but it isn't
applied when the GPU name contains a suffix, as in the case of
`642L`.
## Solution
- Amend the logic to filter out any parts of the GPU name that don't
contain digits, thus enabling the fix on `642L`.
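An illustrative sketch of the amended matching (not the exact Bevy code),
keeping only the digits of the model part so `642L` matches `642`:
```rust
// Extract the numeric Adreno model from a driver-reported name such as
// "Adreno (TM) 642L" so it can be compared against the known-bad model list.
fn adreno_model(gpu_name: &str) -> Option<u32> {
    gpu_name.split_whitespace().find_map(|part| {
        let digits: String = part.chars().filter(char::is_ascii_digit).collect();
        digits.parse().ok()
    })
}
```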
## Testing
- Ran the Android example on a Nothing Phone 1. Before this change it
crashed, after it works as intended.
---------
Co-authored-by: Sam Pettersson <sam.pettersson@geoguessr.com>
# Objective
Fixes #14883
## Solution
Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:
* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.
## Testing
Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.
## Migration Guide
The most likely migration needed is changing code from this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
entity.insert(DepthPrepass);
}
if normal_prepass {
entity.insert(NormalPrepass);
}
if motion_vector_prepass {
entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
entity.insert(DeferredPrepass);
}
```
to this:
```
let mut entity = commands.get_or_spawn(entity);
if depth_prepass {
entity = entity.insert(DepthPrepass);
}
if normal_prepass {
entity = entity.insert(NormalPrepass);
}
if motion_vector_prepass {
entity = entity.insert(MotionVectorPrepass);
}
if deferred_prepass {
entity.insert(DeferredPrepass);
}
```
as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32-bit visbuffer texture with a 64-bit visbuffer buffer,
where the high 32 bits encode depth and the low 32 bits encode the
existing cluster + triangle IDs. We can't use 64-bit textures, as
wgpu/naga doesn't support atomic ops on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
- Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the result, copying how Nanite
works (see the packing sketch after this list)
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet, I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
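For reference, the packing scheme described above can be sketched on the CPU
side like this (the actual implementation is WGSL; the exact ID bit split is an
assumption):
```rust
// Pack depth into the high 32 bits and cluster + triangle IDs into the low 32
// bits, so an atomic max keeps the winning fragment's depth and IDs together.
fn pack_visbuffer_entry(depth: f32, cluster_id: u32, triangle_id: u32) -> u64 {
    let depth_bits = depth.to_bits() as u64;
    // <= 64 triangles per cluster, so 7 bits are enough for the triangle ID.
    let ids = ((cluster_id << 7) | (triangle_id & 0x7F)) as u64;
    (depth_bits << 32) | ids
}
```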
### Misc changes
- Fixed cluster culling importing, but never actually using, the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate "material
shade pass", along with some other misc renaming (in the future, these
will even be compute shaders, not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
# Objective
When using instancing, 2 `VertexBufferLayout`s are needed, one for
per-vertex and one for per-instance data. Shader locations of all
attributes must not overlap, so one of the layouts needs to start its
locations at an offset. However,
`VertexBufferLayout::from_vertex_formats` will always start locations at
0, requiring manual adjustment, which is currently pretty verbose.
## Solution
Add `VertexBufferLayout::offset_locations`, which adds an offset to all
attribute locations.
Code using this method looks like this:
```rust
VertexState {
shader: BACKBUFFER_SHADER_HANDLE.typed(),
shader_defs: Vec::new(),
entry_point: "vertex".into(),
buffers: vec![
VertexBufferLayout::from_vertex_formats(
VertexStepMode::Vertex,
[VertexFormat::Float32x2],
),
VertexBufferLayout::from_vertex_formats(
VertexStepMode::Instance,
[VertexFormat::Float32x2, VertexFormat::Float32x3],
)
.offset_locations(1),
],
}
```
Alternative solutions include:
- Pass the starting location to `from_vertex_formats` – this is a bit
simpler than my solution here, but most calls don't need an offset, so
they'd always pass 0 there.
- Do nothing and make the user hand-write this.
---
## Changelog
- Add `VertexBufferLayout::offset_locations` to simplify buffer layout
construction when using instancing.
---------
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Adding more features to the `AsBindGroup` proc macro means making the trait
arguments uglier. Downstream implementors of the trait without the proc
macro might want to do different things than our default arguments allow.
## Solution
Make `AsBindGroup` take an associated `Param` type.
## Migration Guide
`AsBindGroup` now allows the user to specify a `SystemParam` to be used
for creating bind groups.
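A hedged sketch of the new shape of the trait; the method signature below is a
simplified stand-in, the point is the associated `Param` type:
```rust
use bevy::ecs::system::{SystemParam, SystemParamItem};

// Simplified stand-in: implementors now pick the system parameters their bind
// group preparation needs via an associated type, instead of a fixed argument
// list baked into the trait.
trait AsBindGroupSketch {
    type Param: SystemParam + 'static;

    fn prepare(&self, param: &mut SystemParamItem<Self::Param>);
}
```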
# Objective
- Remove the `wgpu_trace` feature while still making it easy/possible to
record wgpu traces for debugging.
- Closes #14725.
- Get a taste of the bevy codebase. :P
## Solution
This PR performs the above objective by removing the `wgpu_trace`
feature from all `Cargo.toml` files.
However, wgpu traces are still useful for debugging - but to record
them, you need to pass in a directory path to store the traces in. To
avoid forcing users into manually creating the renderer,
`bevy_render::settings::WgpuSettings` now has a `trace_path` field, so
that all of Bevy's automatic initialization can happen while still
allowing for tracing.
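A sketch of what enabling tracing might look like with the new field (the
exact `WgpuSettings`/`RenderPlugin` wiring shown here is an assumption):
```rust
use bevy::prelude::*;
use bevy::render::settings::{RenderCreation, WgpuSettings};
use bevy::render::RenderPlugin;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(RenderPlugin {
            render_creation: RenderCreation::Automatic(WgpuSettings {
                // Directory where wgpu will write the trace.
                trace_path: Some("wgpu_trace".into()),
                ..default()
            }),
            ..default()
        }))
        .run();
}
```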
## Testing
- Did you test these changes? If so, how?
- I have tested these changes, but only via running `cargo run -p ci`. I
am hoping the Github Actions workflows will catch anything I missed.
- Are there any parts that need more testing?
- I do not believe so.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
- If you want to test these changes, I have updated the debugging guide
(`docs/debugging.md`) section on WGPU Tracing.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
- I ran the above command on a Windows 10 64-bit (x64) machine, using
the `stable-x86_64-pc-windows-msvc` toolchain. I do not have anything
set up for other platforms or targets (though I can't imagine this needs
testing on other platforms).
---
## Migration Guide
1. The `bevy/wgpu_trace`, `bevy_render/wgpu_trace`, and
`bevy_internal/wgpu_trace` features no longer exist. Remove them from
your `Cargo.toml`, CI, tooling, and what-not.
2. Follow the instructions in the updated `docs/debugging.md` file in
the repository, under the WGPU Tracing section.
Because of the changes made, you can now generate traces to any path,
rather than the hardcoded `%WorkspaceRoot%/wgpu_trace` folder (where
`%WorkspaceRoot%` is... the root of your crate's workspace).
(If WGPU hasn't restored tracing functionality...) Do note that WGPU has
not yet restored tracing functionality. However, once it does, the above
should be sufficient to generate new traces.
---------
Co-authored-by: TrialDragon <31419708+TrialDragon@users.noreply.github.com>
# Objective
Rewrite screenshotting to be able to accept any `RenderTarget`.
Closes #12478
## Solution
Previously, screenshotting relied on setting a variety of state on the
requested window. When extracted, the window's `swap_chain_texture_view`
property would be swapped out with a texture_view created that frame for
the screenshot pipeline to write back to the cpu.
Besides being tightly coupled to the window in a way that prevented
screenshotting other render targets, this approach had the drawback of
relying on the implicit state of `swap_chain_texture_view` being
returned from a `NormalizedRenderTarget` when view targets were
prepared. Because that property is set every frame for windows, this
wasn't a problem, but it poses a problem for render target images.
Namely, to do the equivalent trick, we'd have to replace the
`GpuImage`'s texture view and somehow restore it later.
As such, this PR creates a new `prepare_view_textures` system, which runs
before `prepare_view_targets`, allowing a new `prepare_screenshots`
system to be sandwiched between them and overwrite the render target's
texture view if a screenshot has been requested that frame for the given
target.
Additionally, screenshotting itself has been changed to use a component
+ observer pattern. We now spawn a `Screenshot` component into the
world, whose lifetime is tracked with a series of marker components.
When the screenshot is read back to the CPU, we send the image over a
channel back to the main world, where an observer fires on the screenshot
entity before it is despawned the next frame. This allows the user to
access resources in their save callback that might be useful (e.g.
uploading the screenshot over the network).
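A hedged usage sketch based on the description and migration guide below; the
`Screenshot::primary_window` constructor and the `save_to_disk` observer
helper are assumptions about the final API:
```rust
use bevy::prelude::*;
use bevy::render::view::screenshot::{save_to_disk, Screenshot};

// Spawn a Screenshot entity for the primary window's render target and react
// to the captured image with an observer.
fn take_screenshot(mut commands: Commands) {
    commands
        .spawn(Screenshot::primary_window())
        .observe(save_to_disk("screenshot.png"));
}
```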
## Testing
![image](https://github.com/user-attachments/assets/48f19aed-d9e1-4058-bb17-82b37f992b7b)
TODO:
- [x] Web
- [ ] Manual texture view
---
## Showcase
render to texture example:
<img
src="https://github.com/user-attachments/assets/612ac47b-8a24-4287-a745-3051837963b0"
width=200/>
web saving still works:
<img
src="https://github.com/user-attachments/assets/e2a15b17-1ff5-4006-ab2a-e5cc74888b9c"
width=200/>
## Migration Guide
`ScreenshotManager` has been removed. To take a screenshot, spawn a
`Screenshot` entity with the specified render target and provide an
observer targeting the `ScreenshotCaptured` event. See the
`window/screenshot` example.
---------
Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com>
# Objective
There is a tiny seam at the top of the annulus caused by normal
floating-point error in calculating the coordinates. When generating the
last pair of triangles, where `i == n`, `(TAU / n) * i` does not
equal `TAU` exactly.
Fixes https://github.com/komadori/bevy_mod_outline/issues/42
## Solution
This can be fixed by changing the calculation so that `(TAU / n) * (i %
n) == 0.0`, which is equivalent for trigonometric purposes.
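Illustratively (not the exact Bevy code):
```rust
use std::f32::consts::TAU;

// Wrapping the index with `i % n` makes the final vertex angle exactly 0.0,
// so the last pair of triangles closes onto the first vertex with no seam.
fn segment_angle(i: u32, n: u32) -> f32 {
    (TAU / n as f32) * (i % n) as f32
}
```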
## Testing
Added the unit test
`bevy_render::mesh::primitives::dim2::tests::test_annulus`.
Fixes #14825
Edit: After feedback, these are the updated methods:
- `toggle_inherited_visible(&mut self)`
- Toggles between `Visibility::Inherited` and `Visibility::Visible`. If
the value is `Visibility::Hidden`, it remains unaffected.
- `toggle_inherited_hidden(&mut self)`
- Toggles between `Visibility::Inherited` and `Visibility::Hidden`. If
the value is `Visibility::Visible`, it remains unaffected.
- `toggle_visible_hidden(&mut self)`
- Toggles between `Visibility::Visible` and `Visibility::Hidden`. If the
value is `Visibility::Inherited`, it remains unaffected.
---------
Co-authored-by: Chris Russell <8494645+chescock@users.noreply.github.com>
# Objective
`RenderLayers` was marketed as being unlimited in the Bevy 0.14 release
notes, but the most obvious constructor doesn't actually support
unlimited layers.
We should explicitly document this.
## Solution
Add some docs mentioning the limit and pointing the user to `with` or
`from_layers` if they need an arbitrary number of layers.
# Objective
Fixes #14782
## Solution
- Enable the lint and fix all resulting hints (`--fix`). Also tried to
figure out the false positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
parts can already be merged, resulting in fewer files touched and less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import and adapt those
cases? At the current stage this is just a plain adoption of the
suggestions, in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
Currently, if we use an image with the wrong sampler type in a material,
wgpu panics with an invalid texture format. Turn this into a warning and
fail more gracefully.
## Solution
The expected sampler type is specified in the `AsBindGroup` derive, so we
can just check that the image sampler is what it should be.
I am not totally sure about the mapping of image sampler type to
`#[sampler(type)]`; I assumed:
```
"filtering" => [ TextureSampleType::Float { filterable: true } ],
"non_filtering" => [
TextureSampleType::Float { filterable: false },
TextureSampleType::Sint,
TextureSampleType::Uint,
],
"comparison" => [ TextureSampleType::Depth ],
```
This reverts commit e37bf18e2b, added in
#14784.
# Objective
The PR was fine, but the work was very poorly motivated and the
resulting API is not actually very nice. The actual user need is likely
better addressed by #14825.
## Solution
Revert the offending PR.
# Objective
Fixes #14521.
## Solution
Added two methods to `Visibility`:
```rs
is_visible() -> Result<bool, String>
```
and
```rs
visibility_from_bool(bool) -> Visibility
```
## Testing
Ran
* `cargo run -p ci -- lints`
* `cargo run -p ci -- test`
* `cargo run -p ci -- compile`
it seems to be working.
However, I got a few error messages: `ERROR bevy_log: could not set global
logger and tracing subscriber as they are already set. Consider
disabling LogPlugin` in `cargo run -p ci -- test`, even though all tests
passed. I'm not sure if that's expected behaviour.
P.S. I'm new to contributing, please correct me if anything is wrong.
# Objective
`MeshVertexAttributeId` is currently a wrapper type around a `usize`.
Application developers are exposed to the `usize` whenever they need to
define a new custom vertex attribute, which requires them to generate a
random `usize` ID to avoid clashes with any other custom vertex
attributes in the same application. As the range of a `usize` is
platform dependent, developers on 64-bit machines may inadvertently
generate random values which will fail to compile for a 32-bit target.
The use of a `usize` here encourages non-portable behaviour and should
be replaced with a fixed width type.
## Solution
In this PR I have changed the ID type from `usize` to `u64`, but equally
a `u32` could be used at the risk of breaking some extant non-portable
programs and increasing the chance of an ID collision.
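For illustration, a custom attribute defined with a fixed-width 64-bit ID
(mirroring Bevy's custom vertex attribute example) now compiles identically on
32-bit and 64-bit targets:
```rust
use bevy::render::mesh::MeshVertexAttribute;
use bevy::render::render_resource::VertexFormat;

// The second argument is the attribute ID; with `u64` its range no longer
// depends on the target's pointer width.
const ATTRIBUTE_BLEND_COLOR: MeshVertexAttribute =
    MeshVertexAttribute::new("BlendColor", 988_540_917_813_184_756, VertexFormat::Float32x4);
```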
# Objective
- Add "Available on crate feature <image format> only." for docs of
image format related types/functions
- Add warning "WARN bevy_render::texture::image: feature "<image
format>" is not enabled" on load attempt
- Fixes#13468 .
## Solution
- Added `#[cfg(feature = "<image format>")]` for types, and
`warn!("feature \"<image format>\" is not enabled");` for `ImageFormat` enum conversions
## Testing
Ran the reproducing example from issue #13468 and saw in the logs
`WARN bevy_render::texture::image: feature "exr" is not enabled`
generated docs with command `RUSTDOCFLAGS="-Zunstable-options
--cfg=docsrs" cargo +nightly doc --workspace --all-features --no-deps
--document-private-items --open` and saw
![image](https://github.com/bevyengine/bevy/assets/17225606/820262bb-b4e6-4a5e-a306-bddbe9c40852)
that the docs contain the `Available on crate feature <image format> only.`
marks
![image](https://github.com/bevyengine/bevy/assets/17225606/57463440-a2ea-435f-a2c2-50d34f7f55a9)
## Migration Guide
Image format related entities are feature gated, if there are
compilation errors about unknown names there are some of features in
list (`exr`, `hdr`, `basis-universal`, `png`, `dds`, `tga`, `jpeg`,
`bmp`, `ktx2`, `webp` and `pnm`) should be added.
# Objective
`World::clear_entities` is ambiguous with all of the other systems in
`RenderSet::Cleanup` because it accesses `&mut World`.
## Solution
I've added another system set variant, and made sure that this runs
after everything else.
## Testing
The `ambiguity_detection` example
## Migration Guide
`World::clear_entities` is now part of `RenderSet::PostCleanup` rather
than `RenderSet::Cleanup`. Your cleanup systems should likely stay in
`RenderSet::Cleanup`.
## Additional context
Spotted when working on #7386: this was responsible for a large number
of ambiguities.
This should be removed if / when #14449 is merged: there's no need to
call `clear_entities` at all if the rendering world is retained!
# Objective
- Fixes #14697
## Solution
This PR modifies the existing `all_tuples!` macro to optionally accept a
`#[doc(fake_variadic)]` attribute in its input. If the attribute is
present, each invocation of the impl macro gets the correct attributes
(i.e. the first impl receives `#[doc(fake_variadic)]` while the other
impls are hidden using `#[doc(hidden)]`).
Impls for the empty tuple (unit type) are left untouched (that's what
the [standard
library](https://doc.rust-lang.org/std/cmp/trait.PartialEq.html#impl-PartialEq-for-())
and
[serde](https://docs.rs/serde/latest/serde/trait.Serialize.html#impl-Serialize-for-())
do).
To work around https://github.com/rust-lang/cargo/issues/8811 and to get
impls on re-exports to correctly show up as variadic, `--cfg docsrs_dep`
is passed when building the docs for the toplevel `bevy` crate.
`#[doc(fake_variadic)]` only works on tuples and fn pointers, so impls
for structs like `AnyOf<(T1, T2, ..., Tn)>` are unchanged.
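A hedged sketch of an invocation with the new attribute support, assuming the
post-PR `all_tuples!` re-exported from `bevy_utils`; the exact placement of
the forwarded attribute follows the description above and may differ in
detail:
```rust
use bevy::utils::all_tuples;

trait Describe {
    fn describe() -> &'static str;
}

// Helper macro invoked once per tuple arity by `all_tuples!`.
macro_rules! impl_describe {
    ($($t:ident),*) => {
        impl<$($t: Describe),*> Describe for ($($t,)*) {
            fn describe() -> &'static str {
                "tuple"
            }
        }
    };
}

// The forwarded attribute lets rustdoc collapse the 1..=15 impls into a single
// variadic entry; the remaining impls are emitted with #[doc(hidden)].
all_tuples!(
    #[doc(fake_variadic)]
    impl_describe,
    1,
    15,
    T
);
```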
## Testing
I built the docs locally using `RUSTDOCFLAGS='--cfg docsrs'
RUSTFLAGS='--cfg docsrs_dep' cargo +nightly doc --no-deps --workspace`
and checked the documentation page of a trait both in its original crate
and the re-exported version in `bevy`.
The description should correctly mention for how many tuple items the
trait is implemented.
I added `rustc-args` for docs.rs to the `bevy` crate; I hope there
aren't any other notable crates that re-export `#[doc(fake_variadic)]`
traits.
---
## Showcase
`bevy_ecs::query::QueryData`:
<img width="1015" alt="Screenshot 2024-08-12 at 16 41 28"
src="https://github.com/user-attachments/assets/d40136ed-6731-475f-91a0-9df255cd24e3">
`bevy::ecs::query::QueryData` (re-export):
<img width="1005" alt="Screenshot 2024-08-12 at 16 42 57"
src="https://github.com/user-attachments/assets/71d44cf0-0ab0-48b0-9a51-5ce332594e12">
## Original Description
<details>
Resolves #14697
Submitting as a draft for now, very WIP.
Unfortunately, the docs don't show the variadics nicely when looking at
reexported items.
For example:
`bevy_ecs::bundle::Bundle` correctly shows the variadic impl:
![image](https://github.com/user-attachments/assets/90bf8af1-1d1f-4714-9143-cdd3d0199998)
while `bevy::ecs::bundle::Bundle` (the reexport) shows all the impls
(not good):
![image](https://github.com/user-attachments/assets/439c428e-f712-465b-bec2-481f7bf5870b)
Built using `RUSTDOCFLAGS='--cfg docsrs' cargo +nightly doc --workspace
--no-deps` (`--no-deps` because of wgpu-core).
Maybe I missed something or this is a limitation in the *totally not
private* `#[doc(fake_variadic)]` thingy. In any case I desperately need
some sleep now :))
</details>
Upgrading to WGPU 22.
Needs `naga_oil` to upgrade first; I've got a fork that compiles but
fails tests, so until that's fixed and the crate is officially
updated/released this will be blocked.
---------
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
# Objective
Fixes #14365
## Migration Guide
- When using the iterator returned by `Mesh::attributes` or
`Mesh::attributes_mut`, the first value of the tuple is now the
`MeshVertexAttribute` instead of the `MeshVertexAttributeId`. To access the
`MeshVertexAttributeId`, use the `MeshVertexAttribute.id` field.
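Illustratively (a minimal sketch of the migration described above):
```rust
use bevy::render::mesh::{Mesh, MeshVertexAttributeId};

// The iterator now yields the full `MeshVertexAttribute`; read `.id` if you
// still need the `MeshVertexAttributeId`.
fn attribute_ids(mesh: &Mesh) -> Vec<MeshVertexAttributeId> {
    mesh.attributes()
        .map(|(attribute, _values)| attribute.id)
        .collect()
}
```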
Signed-off-by: Sarthak Singh <sarthak.singh99@gmail.com>
# Objective
- Add custom images as cursors
- Fixes #9557
## Solution
- Change cursor type to accommodate both native and image cursors
- I don't really like this solution because I couldn't use
`Handle<Image>` directly. I would need to import `bevy_assets`, and that
causes a circular dependency. Alternatively, we could use winit's
`CustomCursor` smart pointers, but that seems hard because the event
loop is needed to create those and is not easily accessible to users.
So now I need to copy around rgba buffers, which is sad.
- I use a cache because creating cursor images is really slow, especially
on the web
- Sorry to #14196 for yoinking; I just wanted to make a quick solution
for myself and thought that I should probably share it too.
Update:
- Now uses `Handle<Image>`, reads rgba data in `bevy_render` and uses
resources to send the data to `bevy_winit`, where the final cursors are
created.
## Testing
- Added example which works fine at least on Linux Wayland (winit side
has been tested with all platforms).
- I haven't tested if the url cursor works.
## Migration Guide
- `CursorIcon` is no longer a field in `Window`, but a separate
component can be inserted to a window entity. It has been changed to an
enum that can hold custom images in addition to system icons.
- `Cursor` is renamed to `CursorOptions` and `cursor` field of `Window`
is renamed to `cursor_options`
- `CursorIcon` is renamed to `SystemCursorIcon`
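A hedged sketch of the new component-based usage; the import paths and the
enum variant names are assumptions based on the guide above:
```rust
use bevy::prelude::*;
use bevy::window::{PrimaryWindow, SystemCursorIcon};
use bevy::winit::cursor::CursorIcon;

// The cursor is now a component inserted onto the window entity.
fn set_pointer_cursor(
    mut commands: Commands,
    window: Query<Entity, With<PrimaryWindow>>,
) {
    let window = window.single();
    commands
        .entity(window)
        .insert(CursorIcon::System(SystemCursorIcon::Pointer));
}
```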
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
Basically it's https://github.com/bevyengine/bevy/pull/13792 with the
bumped versions of `encase` and `hexasphere`.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Implements #14547
## Solution
Add a function `invert_winding` for `Mesh` that inverts the winding for
`LineList`, `LineStrip`, `TriangleList` and `TriangleStrip`.
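For a `TriangleList`, inverting the winding boils down to reversing the index
order within each triangle; roughly:
```rust
// Swapping two indices of each triangle flips its winding (and therefore the
// side from which it is considered front-facing).
fn invert_triangle_list_winding(indices: &mut [u32]) {
    for triangle in indices.chunks_exact_mut(3) {
        triangle.swap(1, 2);
    }
}
```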
## Testing
Tests added
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Alix Bott <bott.alix@gmail.com>
# Objective
I want to get the visual depth (after view proj matrix stuff) of the
object beneath my cursor.
Even with a write-back of the depth texture, you would still need
to convert the NDC depth to a logical value.
## Solution
This is done on shader-side by [this
function](e6261b0f5f/crates/bevy_pbr/src/render/view_transformations.wgsl (L151)),
which I ported over to the cpu-side.
I also added `world_to_viewport_with_depth` to get a `Vec3` instead of
`Vec2`.
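A hedged usage sketch, assuming the new method mirrors `world_to_viewport` and
returns an `Option<Vec3>` whose `z` is the logical depth (the exact signature
may differ):
```rust
use bevy::prelude::*;

// Hypothetical helper: get the visual depth of a world-space point for the
// given camera.
fn visual_depth(camera: &Camera, camera_transform: &GlobalTransform, world_pos: Vec3) -> Option<f32> {
    let viewport_pos = camera.world_to_viewport_with_depth(camera_transform, world_pos)?;
    Some(viewport_pos.z)
}
```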
---
If anyone knows a smarter solution to get the visual depth instead of
going `screen -> viewport ray -> screen`, please let me know :>
# Objective
- Fix #14295
## Solution
- Early out when `GFBD::get_index_and_compare_data` returns None.
## Testing
- Tested on a selection of examples including `many_foxes` and
`3d_shapes`.
- Resolved the original issue in `bevy_vector_shapes`.
# Objective
- Fix issue #2611
## Solution
- Add `--generate-link-to-definition` to all the `rustdoc-args` arrays
in the `Cargo.toml`s (for docs.rs)
- Add `--generate-link-to-definition` to the `RUSTDOCFLAGS` environment
variable in the docs workflow (for dev-docs.bevyengine.org)
- Document all the workspace crates in the docs workflow (needed because
otherwise only the source code of the `bevy` package will be included,
making the argument useless)
- I think this also fixes #3662, since it fixes the bug on
dev-docs.bevyengine.org, while on docs.rs it has been fixed for a while
on their side.
---
## Changelog
- The source code viewer on docs.rs now includes links to the
definitions.
# Objective
- `bevy_render` depends on `image 0.25` but uses `image::ImageReader`
which was added only in `image 0.25.2`
- users that have `image 0.25` in their `Cargo.lock` and update to the
latest `bevy_render` may thus get a compilation error due to this (at least I
did)
## Solution
- Properly set the correct minimum version of `image` that `bevy_render`
depends on.
# Objective
Fix a memory leak in `TextureCache` caused by the internal HashMap never
having unused entries cleared.
This isn't a giant memory leak, given the unused entries are simply
empty vectors. Though, if someone goes and resizes a window a bunch, it
can lead to hundreds/thousands of TextureDescriptor keys adding up in
the hashmap – which isn't ideal.
## Solution
- Only retain hashmap entries that still have textures.
- I also added an `is_empty()` method to `TextureCache`, which is useful
for 3rd-party higher-level caches that might have individual caches by
view entity or texture type, for example.
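The retention logic is essentially the following (a sketch with the map key
and entry types simplified):
```rust
use std::collections::HashMap;
use std::hash::Hash;

// Drop descriptor keys whose cached entry lists are empty, so the map doesn't
// accumulate stale keys as windows are resized.
fn retain_used_entries<K: Eq + Hash, E>(textures: &mut HashMap<K, Vec<E>>) {
    textures.retain(|_descriptor, entries| !entries.is_empty());
}
```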
## Testing
- Verified the examples still work (this is a trivial change)
# Objective
- Made `ViewUniform` fields public so that 3rd-parties can create this
uniform. This is useful for custom pipelines that use custom views (e.g.
views buffered by a particular amount).
# Objective
- It's possible to have errors in a draw command, but these errors are
ignored
## Solution
- Return a result with the error
## Changelog
- Renamed `RenderCommandResult::Failure` to `RenderCommandResult::Skip`
- Added a `reason` string parameter to `RenderCommandResult::Failure`
## Migration Guide
If you were using `RenderCommandResult::Failure` to just ignore an error
and retry later, use `RenderCommandResult::Skip` instead.
This wasn't intentional, but this PR should also help with
https://github.com/bevyengine/bevy/issues/12660 since we can turn a few
unwraps into error messages now.
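A simplified stand-in showing how the variants are now meant to be used; the
real enum lives in `bevy_render`, and the shape here is an assumption based on
the changelog above:
```rust
// `Skip` quietly skips the draw (the old `Failure` semantics); `Failure` now
// carries a reason that can be surfaced as an error message.
enum RenderCommandResult {
    Success,
    Skip,
    Failure(&'static str),
}

fn check_vertex_buffer(buffer: Option<&[u8]>) -> RenderCommandResult {
    match buffer {
        Some(_) => RenderCommandResult::Success,
        None => RenderCommandResult::Failure("missing vertex buffer"),
    }
}
```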
---------
Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com>
Currently `TextureFormat::Astc` can't be programmatically constructed
without importing wgpu in addition to bevy.
# Objective
Allow programmatic construction of `TextureFormat::Astc` with no
additional imports required.
## Solution
Exported the two component enums `AstcBlock` and `AstcChannel` used in
`TextureFormat::Astc` construction.
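With the re-exports, construction might look like this (the re-export path is
assumed; the variant names come from wgpu):
```rust
use bevy::render::render_resource::{AstcBlock, AstcChannel, TextureFormat};

fn astc_format() -> TextureFormat {
    TextureFormat::Astc {
        block: AstcBlock::B4x4,
        channel: AstcChannel::UnormSrgb,
    }
}
```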
## Testing
I did not test this, the change seemed pretty safe. :)
# Objective
- The `RenderTarget` type wasn't being registered, and the `target`
field of `Camera` was marked as ignored, so it wasn't inspectable by
editors.
## Solution
- Remove `#[reflect(ignore)]` from the field
- I've also reordered the `Default` impl of `RenderTarget` because it
looked like it belonged to a different type
Switches `Msaa` from being a globally configured resource to a per
camera view component.
Closes #7194
# Objective
Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.
## Solution
Make `Msaa` a component that is required on all camera bundles.
## Testing
Ran a variety of examples to ensure that nothing broke.
TODO:
- [ ] Make sure android still works per previous comment in
`extract_windows`.
---
## Migration Guide
`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired.
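A migration sketch (inserting the component on the camera entity, rather than
inserting a global resource):
```rust
use bevy::prelude::*;

// Before: App::new().insert_resource(Msaa::Sample4)
// After: Msaa is configured per camera entity.
fn spawn_camera(mut commands: Commands) {
    commands
        .spawn(Camera3dBundle::default())
        .insert(Msaa::Sample4);
}
```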
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
- The current default viewport crashes Bevy due to a wgpu validation
error; this PR fixes that
- Fixes https://github.com/bevyengine/bevy/issues/14355
## Solution
- `Viewport::default()` now returns a 1x1 viewport
## Testing
- I modified the `3d_viewport_to_world` example to use
`Viewport::default()`, and it works as expected (only the top-left pixel
is rendered)
# Objective
- `CameraRenderGraph` is not inspectable via reflection, but should be
(the name of the configured render graph should be visible in editors,
etc.)
## Solution
- Derive and reflect `Debug` for `CameraRenderGraph`
This commit uses the [`offset-allocator`] crate to combine vertex and
index arrays from different meshes into single buffers. Since the
primary source of `wgpu` overhead is from validation and synchronization
when switching buffers, this significantly improves Bevy's rendering
performance on many scenes.
This patch is a more flexible version of #13218, which also used slabs.
Unlike #13218, which used slabs of a fixed size, this commit implements
slabs that start small and can grow. In addition to reducing memory
usage, supporting slab growth reduces the number of vertex and index
buffer switches that need to happen during rendering, leading to
improved performance. To prevent pathological fragmentation behavior,
slabs are capped to a maximum size, and mesh arrays that are too large
get their own dedicated slabs.
As an additional improvement over #13218, this commit allows the
application to customize all allocator heuristics. The
`MeshAllocatorSettings` resource contains values that adjust the minimum
and maximum slab sizes, the cutoff point at which meshes get their own
dedicated slabs, and the rate at which slabs grow. Hopefully-sensible
defaults have been chosen for each value.
Unfortunately, WebGL 2 doesn't support the *base vertex* feature, which
is necessary to pack vertex arrays from different meshes into the same
buffer. `wgpu` represents this restriction as the downlevel flag
`BASE_VERTEX`. This patch detects that bit and ensures that all vertex
buffers get dedicated slabs on that platform. Even on WebGL 2, though,
we can combine all *index* arrays into single buffers to reduce buffer
changes, and we do so.
The following measurements are on Bistro:
Overall frame time improves from 8.74 ms to 5.53 ms (1.58x speedup):
![Screenshot 2024-07-09
163521](https://github.com/bevyengine/bevy/assets/157897/5d83c824-c0ee-434c-bbaf-218ff7212c48)
Render system time improves from 6.57 ms to 3.54 ms (1.86x speedup):
![Screenshot 2024-07-09
163559](https://github.com/bevyengine/bevy/assets/157897/d94e2273-c3a0-496a-9f88-20d394129610)
Opaque pass time improves from 4.64 ms to 2.33 ms (1.99x speedup):
![Screenshot 2024-07-09
163536](https://github.com/bevyengine/bevy/assets/157897/e4ef6e48-d60e-44ae-9a71-b9a731c99d9a)
## Migration Guide
### Changed
* Vertex and index buffers for meshes may now be packed alongside other
buffers, for performance.
* `GpuMesh` has been renamed to `RenderMesh`, to reflect the fact that
it no longer directly stores handles to GPU objects.
* Because meshes no longer have their own vertex and index buffers, the
responsibility for the buffers has moved from `GpuMesh` (now called
`RenderMesh`) to the `MeshAllocator` resource. To access the vertex data
for a mesh, use `MeshAllocator::mesh_vertex_slice`. To access the index
data for a mesh, use `MeshAllocator::mesh_index_slice`.
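A hedged sketch of the migration for code that previously read vertex data off
`GpuMesh`; the exact signature and the fields of the returned slice are
assumptions based on the guide above:
```rust
use bevy::asset::AssetId;
use bevy::render::mesh::allocator::MeshAllocator;
use bevy::render::mesh::Mesh;

// Look up where a mesh's vertices now live inside the shared slab.
fn vertex_buffer_for(mesh_allocator: &MeshAllocator, mesh_id: &AssetId<Mesh>) {
    if let Some(slice) = mesh_allocator.mesh_vertex_slice(mesh_id) {
        // `slice` identifies the shared buffer and the element range within it
        // (field names assumed).
        let _buffer = &slice.buffer;
        let _range = slice.range.clone();
    }
}
```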
[`offset-allocator`]: https://github.com/pcwalton/offset-allocator
# Objective
The docs on SpatialBundle's pub const constructors mention that one is
"visible" when it's actually inherited, which afaik means it's
conditional on its parent's visibility.
I feel it's more correct like this.
_Also, I'm seeing how making a PR from github.dev works; hopefully nothing
weird happens_
# Objective
- Fixes overflow when calling `RenderLayers::iter_layers` on layers of
the form `k * 64 - 1`
- Causes a panic in debug mode, and an infinite iterator in release mode
## Solution
- Use `u64::checked_shr` instead of `>>=`
## Testing
- Added a test case for this: `render_layer_iter_no_overflow`
# Objective
- Bevy currently has lot of invalid intra-doc links, let's fix them!
- Also make CI test them, to avoid future regressions.
- Helps with #1983 (but doesn't fix it, as there could still be explicit
links to docs.rs that are broken)
## Solution
- Make `cargo r -p ci -- doc-check` check fail on warnings (could also
be changed to just some specific lints)
- Manually fix all the warnings (note that in some cases it was unclear
to me what the fix should have been, I'll try to highlight them in a
self-review)
Bump version after release
This PR has been auto-generated
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
Fix #14146
## Solution
Expansion of #13323, excluding Adreno 730 and earlier.
## Testing
Tested on an Android device (Adreno 730) that used to crash.