# Objective
Consider the test
```rust
let cell = world.cell();
let _value_a = cell.resource_mut::<A>();
let _value_b = cell.resource_mut::<A>();
```
Currently, this will roughly execute
```rust
// first call
let value = unsafe {
    self.world
        .get_non_send_unchecked_mut_with_id(component_id)?
};
return Some(WorldBorrowMut::new(value, archetype_component_id, self.access));
// second call
let value = unsafe {
    self.world
        .get_non_send_unchecked_mut_with_id(component_id)?
};
return Some(WorldBorrowMut::new(value, archetype_component_id, self.access));
```
where `WorldBorrowMut::new` will panic if the resource is already borrowed.
This means that `_value_a` will be created, its access checked (OK), then `_value_b` will be created, and its access checked (`panic`).
For a moment, both `_value_a` and `_value_b` exist as `&mut T` pointing to the same location, which is instant UB as far as I understand it.
## Solution
Flip the order so that `WorldBorrowMut::new` first checks the access and _then_ creates the value. To do that, we pass an `impl FnOnce() -> Mut<T>` instead of the `Mut<T>` directly:
```rust
let get_value = || unsafe {
    self.world
        .get_non_send_unchecked_mut_with_id(component_id)?
};
return Some(WorldBorrowMut::new(get_value, archetype_component_id, self.access));
```
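To make the ordering argument concrete, here is a minimal, self-contained sketch (using a plain `Cell<bool>` flag instead of Bevy's actual access types, so it is an illustration rather than the real implementation): the borrow flag is checked before the closure that produces the `&mut T` ever runs, so two live `&mut T` to the same location can never coexist.
```rust
use std::cell::Cell;

struct BorrowMut<'a, T> {
    value: &'a mut T,
    borrowed: &'a Cell<bool>,
}

impl<'a, T> BorrowMut<'a, T> {
    fn new(get_value: impl FnOnce() -> &'a mut T, borrowed: &'a Cell<bool>) -> Self {
        // Check the access first...
        assert!(!borrowed.get(), "resource already borrowed mutably");
        borrowed.set(true);
        // ...and only then create the mutable reference.
        Self { value: get_value(), borrowed }
    }
}

impl<T> Drop for BorrowMut<'_, T> {
    fn drop(&mut self) {
        self.borrowed.set(false);
    }
}
```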
# Objective
- shader defs can now have a `bool` or `int` value
- `#if SHADER_DEF <operator> 3`
- ok if `SHADER_DEF` is defined, has the correct type, and passes the comparison
- `==`, `!=`, `>=`, `>`, `<`, `<=` supported
- `#SHADER_DEF` or `#{SHADER_DEF}`
- will be replaced by the value in the shader code
---
## Migration Guide
- replace `shader_defs.push(String::from("NAME"));` with `shader_defs.push("NAME".into());` (see the sketch below)
- if you used shader def `NO_STORAGE_BUFFERS_SUPPORT`, check how `AVAILABLE_STORAGE_BUFFER_BINDINGS` is now used in Bevy default shaders
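A hedged sketch of how this looks in practice; the name of the value-carrying type (`ShaderDefVal` here) and its variants are assumptions about the exact API, not something stated above:
```rust
// Assumed type: a `ShaderDefVal`-style enum carrying bool- or int-valued defs.
let mut shader_defs: Vec<ShaderDefVal> = Vec::new();
shader_defs.push("FEATURE_ENABLED".into()); // plain def, now a bool-valued entry
shader_defs.push(ShaderDefVal::Int("MAX_THINGS".into(), 3)); // int-valued def

// In the shader source, `#if MAX_THINGS >= 3` would then pass, and
// `#MAX_THINGS` / `#{MAX_THINGS}` would be replaced with `3`.
```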
# Objective
`add_node_edge` and `add_slot_edge` are fallible methods, but are always used with `.unwrap()`.
`input_node` is often unwrapped as well.
This points to having an infallible behaviour as default, with an alternative fallible variant if needed.
Improves readability and ergonomics.
## Solution
- Change `add_node_edge` and `add_slot_edge` to panic on error.
- Change `input_node` to panic on `None`.
- Add `try_add_node_edge` and `try_add_slot_edge` in case fallible methods are needed.
- Add `get_input_node` to still be able to get an `Option`.
---
## Changelog
### Added
- `try_add_node_edge`
- `try_add_slot_edge`
- `get_input_node`
### Changed
- `add_node_edge` is now infallible (panics on error)
- `add_slot_edge` is now infallible (panics on error)
- `input_node` now panics on `None`
## Migration Guide
Remove `.unwrap()` from `add_node_edge` and `add_slot_edge`.
For cases where the error was handled, use `try_add_node_edge` and `try_add_slot_edge` instead.
Remove `.unwrap()` from `input_node`.
For cases where the option was handled, use `get_input_node` instead.
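For example (node names are placeholders), a typical call site changes like this:
```rust
// Before: the methods were fallible, so call sites unwrapped.
graph.add_node_edge("some_node", "other_node").unwrap();

// After: infallible by default (panics on error)...
graph.add_node_edge("some_node", "other_node");

// ...with explicit fallible variants where the error should be handled.
graph.try_add_node_edge("some_node", "other_node")?;
```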
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
# Objective
Fixes #4697. Hierarchical propagation of properties, currently only Transform -> GlobalTransform, can be a very expensive operation. Transform propagation is a strict dependency for anything positioned in world-space. In large worlds, this can take quite a bit of time, so limiting it to a single thread can result in poor CPU utilization as it bottlenecks the rest of the frame's systems.
## Solution
- Move entities with a `Transform` but without a parent or children (free-floating `(Global)Transform` entities) into a separate parallel system.
- Chunk the hierarchy based on the root entities and process it in parallel with `Query::par_for_each_mut`.
- Utilize the hierarchy's specific properties introduced in #4717 to allow for safe use of `Query::get_unchecked` on multiple threads. Assuming each child is unique in the hierarchy, it is impossible to have an aliased `&mut GlobalTransform` so long as we verify that the parent for a child is the same one propagated from.
---
## Changelog
Removed: `transform_propagate_system` is no longer `pub`.
Without this fix, piped systems containing exclusive systems fail to run, giving a runtime panic.
With this PR, running piped systems that contain exclusive systems now works.
## Explanation of the bug
This is because, unless overridden, the default implementation of `run` from the `System` trait simply calls `run_unsafe`. That is not valid for exclusive systems. They must always be called via `run`, as `run_unsafe` takes `&World` instead of `&mut World`.
Trivial reproduction example:
```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_system(exclusive.pipe(another))
        .run();
}

fn exclusive(_world: &mut World) {}
fn another() {}
```
If you run this, you will get the panic 'Cannot run exclusive systems with a shared World reference'. The backtrace shows how Bevy (correctly) tries to call the `run` method (because the system is exclusive), but it is the implementation from the `System` trait (because `PipeSystem` does not have its own), which calls `run_unsafe` (incorrect):
- 3: <bevy_ecs::system::system_piping::PipeSystem<SystemA,SystemB> as bevy_ecs::system::system::System>::run_unsafe
- 4: bevy_ecs::system::system::System::run
# Objective
Allow more use cases where the user may benefit from both `ExtractComponentPlugin` _and_ `UniformComponentPlugin`.
## Solution
Add an associated type to `ExtractComponent` in order to allow specifying the output component (or bundle).
Make `extract_component` return an `Option<_>` such that components can be extracted only when needed.
What problem does this solve?
`ExtractComponentPlugin` allows extracting components, but currently the output type is the same as the input.
This means that use cases such as having a settings struct which turns into a uniform is awkward.
For example we might have:
```rust
struct MyStruct {
    enabled: bool,
    val: f32,
}

struct MyStructUniform {
    val: f32,
}
```
With the new approach, we can extract `MyStruct` only when it is enabled, and turn it into its related uniform.
This chains well with `UniformComponentPlugin`.
The user may then:
```rust
app.add_plugin(ExtractComponentPlugin::<MyStruct>::default());
app.add_plugin(UniformComponentPlugin::<MyStructUniform>::default());
```
This then saves the user a fair amount of boilerplate.
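A hedged sketch of what the implementation could look like for the structs above; the associated item names (`Query`, `Filter`, `Out`) and the `QueryItem` parameter are assumptions about the new API shape rather than confirmed signatures:
```rust
impl ExtractComponent for MyStruct {
    type Query = &'static MyStruct;
    type Filter = ();
    // New: the extracted output may differ from the input component.
    type Out = MyStructUniform;

    fn extract_component(item: QueryItem<'_, Self::Query>) -> Option<Self::Out> {
        // Returning `None` skips extraction for this entity.
        item.enabled.then(|| MyStructUniform { val: item.val })
    }
}
```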
## Changelog
### Changed
- `ExtractComponent` can specify output type, and outputting is optional.
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
Add a method to rotate a transform to point towards a direction.
Also updated the docs to link to `forward` and `up` instead of mentioning local negative `Z` and local `Y`.
Unfortunately, links to methods don't work in rust-analyzer :(
Co-authored-by: Devil Ira <justthecooldude@gmail.com>
# Objective
The latest release, Bevy 0.9, moved the `FrameCount` updater into `RenderPlugin`. This means users who run an app with only the Core/Minimal plugins cannot get the correct `FrameCount`; it always returns 0.
As for use cases like a server app, we don't want to add render dependencies to the app.
More detail in #6656
## Solution
- Move the `update_frame_count` into CorePlugin
# Objective
This adds a constructor to `Box` to aid the creation of non-centred boxes. The PR adopts @rezural's work on PR #3322, taking into account the feedback on that PR from @james7132.
## Solution
`Box::from_corners()` creates a `Box` from two opposing corners and automatically determines the min and max extents to ensure that the `Box` is well-formed.
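A usage sketch (the exact argument names/order are assumed):
```rust
use bevy::prelude::{shape, Vec3};

// Two opposite corners; the min/max extents are worked out internally,
// so the resulting box is well-formed regardless of which corner is which.
let non_centred = shape::Box::from_corners(Vec3::new(-1.0, 0.0, -1.0), Vec3::new(2.0, 3.0, 4.0));
```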
Co-authored-by: rezural <rezural@protonmail.com>
# Objective
- Bevy should be usable to create 'overlay' type apps, where the input is not captured by Bevy, but passed down/into a target app, or to allow passive displays/widgets etc.
## Solution
- `winit::window::Window` already has a `set_cursor_hittest()` which basically does this for mouse input events, so I've exposed it (trying to copy the style laid out in the existing wrappings) and added a simple demo.
---
## Changelog
- Added `hittest` to `WindowAttributes`
- Added the `hittest`'s setters/getters
- Modified the `WindowBuilder`
- Modified the `WindowDescriptor`'s `Default` impl.
- Added an example `cargo run --example fallthrough`
# Objective
Fixes #4884. `ComponentTicks` stores both added and changed ticks contiguously in the same 8 bytes. This is convenient when passing around both together, but causes half the bytes fetched from memory for the purposes of change detection to effectively go unused. This is inefficient when most queries (no filter, mutating *something*) only write out to the changed ticks.
## Solution
Split the storage for change detection ticks into two separate `Vec`s inside `Column`. Fetch only what is needed during iteration.
This also potentially removes one blocker from autovectorization of dense queries.
EDIT: This is confirmed to enable autovectorization of dense queries in `for_each` and `par_for_each` where possible. Unfortunately `iter` has other blockers that prevent it.
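A minimal, self-contained sketch of the layout change (not Bevy's actual definitions): the added and changed ticks live in two parallel `Vec`s instead of one `Vec` of pairs, so a query that only writes change ticks never has to touch the added-tick bytes.
```rust
struct Tick(u32);

struct Column {
    // data: BlobVec, // the component values themselves (elided here)
    added_ticks: Vec<Tick>,   // only read by `Added`-style checks
    changed_ticks: Vec<Tick>, // written on every mutable access
}
```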
### TODO
- [x] Microbenchmark
- [x] Check if this allows query iteration to autovectorize simple loops.
- [x] Clean up all of the spurious tuples now littered throughout the API
### Open Questions
- ~~Is `Mut::is_added` absolutely necessary? Can we not just use `Added` or `ChangeTrackers`?~~ It's optimized out if unused.
- ~~Does the fetch of the added ticks get optimized out if not used?~~ Yes it is.
---
## Changelog
Added: `Tick`, a wrapper around a single change detection tick.
Added: `Column::get_added_ticks`
Added: `Column::get_changed_ticks`
Added: `SparseSet::get_added_ticks`
Added: `SparseSet::get_changed_ticks`
Changed: `Column` now stores added and changed ticks separately internally.
Changed: Most APIs returning `&UnsafeCell<ComponentTicks>` now return `TickCells` instead, which contains two separate `&UnsafeCell<Tick>`s, one for each of the component's ticks.
Changed: `Query::for_each(_mut)`, `Query::par_for_each(_mut)` will now leverage autovectorization to speed up query iteration where possible.
## Migration Guide
TODO
# Objective
Fix #6453.
## Solution
Use the solution mentioned in the issue by catching the unwind and dropping the error. Wrap the `executor.try_tick` calls with `std::panic::catch_unwind`.
Ideally this would be moved outside of the hot loop, but the mut ref to the `spawned` future is not `UnwindSafe`.
This PR only addresses the bug, we can address the perf issues (should there be any) later.
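A sketch of the pattern described above (not the exact Bevy code); the closure is wrapped in `AssertUnwindSafe` because the captured executor state is not `UnwindSafe`:
```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Keep ticking the executor even if a spawned task panics;
// the panic payload is simply dropped.
let _ = catch_unwind(AssertUnwindSafe(|| executor.try_tick()));
```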
# Objective
Fix #5149
## Solution
Instead of returning the **total count** of elements in the `QueryIter` in `size_hint`, we return the **count of remaining elements**. This fixes #5149 even when #5148 gets merged.
- https://github.com/bevyengine/bevy/issues/5149
- https://github.com/bevyengine/bevy/pull/5148
---
## Changelog
- Fix partially consumed `QueryIter` and `QueryCombinationIter` having invalid `size_hint`
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
# Objective
Make core pipeline graphic nodes, including `BloomNode`, `FxaaNode`, `TonemappingNode` and `UpscalingNode` public.
This will allow users to construct their own render graphs with these built-in nodes.
## Solution
Make them public.
Also put node names into bevy's core namespace (`core_2d::graph::node`, `core_3d::graph::node`) which makes them consistent.
# Objective
Fix Android touch events being flipped. Only removed the check for Android; I don't have an iOS device to test with. Tested with an emulator and a physical device.
## Solution
Remove the check; it is no longer needed with the coordinate change in 0.9.
# Objective
Delete `ImageMode`. It doesn't do anything except mislead people into thinking it controls the aspect ratio of images somehow.
Fixes #3933 and #6637
## Solution
Delete `ImageMode`
## Changelog
Removes the `ImageMode` enum.
Removes the `image_mode` field from `ImageBundle`
Removes the `With<ImageMode>` query filter from `image_node_system`
Renames `image_node_system` to `update_image_calculated_size_system`
# Objective
Fixes #6661
## Solution
Make `SceneSpawner::spawn_dynamic` return `InstanceId` like other functions there.
---
## Changelog
Make `SceneSpawner::spawn_dynamic` return `InstanceId` instead of `()`.
When I was upgrading to 0.9, I noticed there were some changes to the timer, mainly the new `TimerMode`. When switching from the old `is_repeating()` and `set_repeating()` to the new `mode()` and `set_mode()`, I noticed the docs still had the old description.
# Objective
`BlobVec` currently relies on a scratch piece of memory allocated at initialization to make a temporary copy of a component when using `swap_remove_and_{forget/drop}`. This is potentially suboptimal as it writes to a well-known, but effectively random, part of memory instead of using the stack.
## Solution
As the `FIXME` in the file states, replace `swap_scratch` with a call to `swap_nonoverlapping::<u8>`. The swapped last entry is returned as an `OwnedPtr`.
In theory, this should be faster as the temporary swap is allocated on the stack, `swap_nonoverlapping` allows for easier vectorization for bigger types, and the same memory is used between the swap and the returned `OwnedPtr`.
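A minimal sketch of the idea (not Bevy's actual `BlobVec` code): swap the removed element with the last one directly inside the buffer, then hand back a pointer to the now-popped slot instead of copying through a scratch allocation.
```rust
use std::ptr;

/// `data` points at `len` elements of `item_size` bytes each.
/// Returns a pointer to the removed value, which now lives in the popped last slot.
unsafe fn swap_remove_and_forget(
    data: *mut u8,
    len: &mut usize,
    item_size: usize,
    index: usize,
) -> *mut u8 {
    let last = *len - 1;
    let target = data.add(index * item_size);
    let last_ptr = data.add(last * item_size);
    if index != last {
        // Distinct slots never overlap, so the cheaper non-overlapping swap is fine.
        ptr::swap_nonoverlapping(target, last_ptr, item_size);
    }
    *len = last;
    last_ptr
}
```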
# Objective
* Enable `Res` and `Query` parameter mutual exclusion
* Required for https://github.com/bevyengine/bevy/pull/5080
The `FilteredAccessSet::get_conflicts` method didn't work properly with
`Res` and `ResMut` parameters, because those added their access by using
the `combined_access_mut` method and directly modifying the global
access state of the `FilteredAccessSet`. This caused an inconsistency,
because `get_conflicts` assumes that ALL added accesses have a corresponding
`FilteredAccess` added to the `filtered_accesses` field.
In practice, that means that `SystemParam`s that add their access through
the `Access` returned by `combined_access_mut` and the ones that add
their access using the `add` method lived in two different universes. As
a result, they could never be mutually exclusive.
## Solution
This commit fixes it by removing the `combined_access_mut` method. This
ensures that the `combined_access` field of FilteredAccessSet is always
updated consistently with the addition of a filter. When checking for
filtered access, it is now possible to account for `Res` and `ResMut`
invalid access. This is currently not needed, but might be in the
future.
We add the `add_unfiltered_{read,write}` methods to replace previous
usages of `combined_access_mut`.
We also add improved Debug implementations on FixedBitSet so that their
meaning is much clearer in debug output.
---
## Changelog
* Fix `Res` and `Query` parameter never being mutually exclusive.
## Migration Guide
Note: this mostly changes ECS internals, but since the API is public, it is technically breaking:
* Removed `FilteredAccessSet::combined_access_mut`
* Replace _immutable_ usage of those by `combined_access`
* For _mutable_ usages, use the new `add_unfiltered_{read,write}` methods instead of `combined_access_mut` followed by `add_{read,write}`
# Objective
Make core types in ECS smaller. The column sparse set in Tables is never updated after creation.
## Solution
Create `ImmutableSparseSet` which removes the capacity fields in the backing vec's and the APIs for inserting or removing elements. Drops the size of the sparse set by 3 usizes (24 bytes on 64-bit systems)
## Followup
~~After #4809, Archetype's component SparseSet should be replaced with it.~~ This has been done.
---
## Changelog
Removed: `Table::component_capacity`
## Migration Guide
`Table::component_capacity()` has been removed as Tables do not support adding/removing columns after construction.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Archetype is a deceptively large type in memory. It stores metadata about which components are in which storage in multiple locations, which is only used when creating new Archetypes while moving entities.
## Solution
Remove the redundant `Box<[ComponentId]>`s and iterate over the sparse set of component metadata instead. Reduces Archetype's size by 4 usizes (32 bytes on 64-bit systems), as well as the additional allocations for holding these slices.
It'd seem like there's a downside in that the origin archetype has its component metadata iterated over twice when creating a new archetype, but this change also removes the extra `Vec<ArchetypeComponentId>` allocations when creating a new archetype, which may amortize out to a net gain here. This change likely negatively impacts creating new archetypes with a large number of components, but that cost is mitigated by the fact that these archetypal relationships are cached in `Edges` and incurred only once for each edge created.
## Additional Context
There are several other in-flight PRs that shrink Archetype:
- #4800 merges the entities and rows Vecs together (shaves off 24 bytes per archetype)
- #4809 removes unique_components and moves it to it's own dedicated storage (shaves off 72 bytes per archetype)
---
## Changelog
Changed: `Archetype::table_components` and `Archetype::sparse_set_components` return iterators instead of slices. `Archetype::new` requires iterators instead of parallel slices/vecs.
## Migration Guide
Do I still need to do this? I really hope people were not relying on the public facing APIs changed here.
# Objective
`ComputedVisibility` could afford to be smaller/faster. Optimizing the size and performance of operations on the component will positively benefit almost all extraction systems.
This was listed as one of the potential pieces of future work for #5310.
## Solution
Merge both internal booleans into a single `u8` bitflag field. Rely on bitmasks to evaluate local, hierarchical, and general visibility.
Pros:
- `ComputedVisibility::is_visible` should be a single bitmask test instead of two.
- `ComputedVisibility` is now only 1 byte. Should be able to fit 100% more per cache line when using dense iteration.
Cons:
- Harder to read.
- Setting individual values inside `ComputedVisibility` requires bitmask mutations.
This should be a non-breaking change. No public API was changed. The only publicly visible effect is that `ComputedVisibility` is now 1 byte instead of 2.
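A self-contained illustration of the packing (the flag names are made up for this example and are not Bevy's actual constants):
```rust
#[derive(Clone, Copy)]
struct ComputedVisibilityFlags(u8);

const VISIBLE_IN_VIEW: u8 = 1 << 0;      // "local" visibility
const VISIBLE_IN_HIERARCHY: u8 = 1 << 1; // propagated from ancestors

impl ComputedVisibilityFlags {
    /// One mask comparison instead of two separate boolean loads.
    fn is_visible(self) -> bool {
        const BOTH: u8 = VISIBLE_IN_VIEW | VISIBLE_IN_HIERARCHY;
        self.0 & BOTH == BOTH
    }

    /// Setting an individual value is a bitmask mutation.
    fn set_visible_in_view(&mut self, visible: bool) {
        if visible {
            self.0 |= VISIBLE_IN_VIEW;
        } else {
            self.0 &= !VISIBLE_IN_VIEW;
        }
    }
}
```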
# Objective
Fixes #6594
## Solution
- The `new` function for `Size` is now a `const` function :)
## Changelog
- The `new` function for `Size` is now a `const` function
## Migration Guide
- Nothing has been changed
# Objective
In bevy 0.8 you could list all resources using `world.archetypes().resource().components()`. As far as I can tell the resource archetype has been replaced with the `Resources` storage, and it would be nice if it could be used to iterate over all resource component IDs as well.
## Solution
- add `fn Resources::iter(&self) -> impl Iterator<Item = (ComponentId, &ResourceData)>`
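A hedged usage sketch, assuming the `Resources` storage is reachable through `World::storages()`:
```rust
// List the ComponentId of every resource currently stored in the world.
for (component_id, _data) in world.storages().resources.iter() {
    println!("resource: {component_id:?}");
}
```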
# Objective
- Derive Clone and Debug for `AssetPlugin`
- Make it possible to log asset server settings
- And get an owned instance if wrapping `AssetPlugin` in another plugin. See: 129224ef72/src/web_asset_plugin.rs (L45)
# Objective
> Part of #6573
`Children` was not being properly deserialized in scenes. This was due to a missing registration on `SmallVec<[Entity; 8]>`, which is used by `Children`.
## Solution
Register `SmallVec<[Entity; 8]>`.
---
## Changelog
- Registered `SmallVec<[Entity; 8]>`
# Objective
Fixes #6030, making ``serde`` optional.
## Solution
This was solved by making a ``serialize`` feature that can activate ``serde``, which is now optional.
When ``serialize`` is deactivated, the ``Plugin`` implementation for ``ScenePlugin`` does nothing.
Co-authored-by: Linus Käll <linus.kall.business@gmail.com>
# Objective
Currently, `Ptr` and `PtrMut` can only be constructed via unsafe code. This means that downgrading a reference to an untyped pointer is very cumbersome, despite being a very simple operation.
## Solution
Define conversions for easily and safely constructing untyped pointers. This is the non-owned counterpart to `OwningPtr::make`.
Before:
```rust
let ptr = unsafe { PtrMut::new(NonNull::from(&mut value).cast()) };
```
After:
```rust
let ptr = PtrMut::from(&mut value);
```
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
I needed a window which is always on top, to create an overlay app.
## Solution
Expose the `always_on_top` property of winit in Bevy's `WindowDescriptor` as a boolean flag.
---
## Changelog
### Added
- add `WindowDescriptor.always_on_top` which configures a window to stay on top.
# Objective
`ScalingMode::Auto` for cameras only targets `min_height` and `min_width`, or as the docs say: `Use minimal possible viewport size while keeping the aspect ratio.`
But there is no `ScalingMode` that targets `max_height` and `max_width`, i.e. `Use maximal possible viewport size while keeping the aspect ratio.`
## Solution
Added `ScalingMode::AutoMax` that does the exact opposite of `ScalingMode::Auto`
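A hedged usage sketch (the field names are assumed to mirror the existing min-based variant):
```rust
// Use the largest possible viewport that fits within these bounds
// while keeping the aspect ratio.
projection.scaling_mode = ScalingMode::AutoMax {
    max_width: 1920.0,
    max_height: 1080.0,
};
```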
---
## Changelog
Renamed `ScalingMode::Auto` to `ScalingMode::AutoMin`.
## Migration Guide
just rename `ScalingMode::Auto` to `ScalingMode::AutoMin` if you are using it.
Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
Add a method to get the focused window.
Use this instead of `WindowFocused` events in `close_on_esc`.
Seems that the OS/window manager might not always send focused events on application startup.
Sadly, not a fix for #5646.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
# Objective
Fixes #3225, allowing for flippable UI images.
## Solution
Add flip_x and flip_y fields to UiImage, and swap the UV coordinates accordingly in ui_prepare_nodes.
## Changelog
* Changes UiImage to a struct with texture, flip_x, and flip_y fields.
* Adds flip_x and flip_y fields to ExtractedUiNode.
* Changes extract_uinodes to extract the flip_x and flip_y values from UiImage.
* Changes prepare_uinodes to swap the UV coordinates as required.
* Changes UiImage derefs to texture field accesses.
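A hedged usage sketch of the new fields (asset path and bundle wiring are illustrative):
```rust
commands.spawn(ImageBundle {
    image: UiImage {
        texture: asset_server.load("icon.png"),
        flip_x: true, // mirror horizontally
        flip_y: false,
    },
    ..default()
});
```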
# Objective
Fixes #6615.
`BlobVec` does not respect alignment for zero-sized types, which results in UB whenever a ZST with alignment other than 1 is used in the world.
## Solution
Add the fn `bevy_ptr::dangling_with_align`.
---
## Changelog
+ Added the function `dangling_with_align` to `bevy_ptr`, which creates a well-aligned dangling pointer to a type whose alignment is not known at compile time.
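A self-contained sketch of the idea (the exact signature in `bevy_ptr` may differ, e.g. it may take a `NonZeroUsize`): for a ZST, any non-null, well-aligned address is valid, and the alignment value itself is such an address.
```rust
use std::ptr::NonNull;

pub fn dangling_with_align(align: usize) -> NonNull<u8> {
    debug_assert!(align.is_power_of_two());
    // `align` is non-zero and trivially a multiple of itself,
    // so it is a valid (dangling) pointer value for this alignment.
    NonNull::new(align as *mut u8).expect("alignment must be non-zero")
}
```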
# Objective
- Implements removal of entries from a `dyn Map`
- Fixes #6563
## Solution
- Adds a `remove` method to the `Map` trait which takes in a `&dyn Reflect` key and returns the value removed if it was present.
---
## Changelog
- Added `Map::remove`
## Migration Guide
- Implementors of `Map` will need to implement the `remove` method.
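A sketch of the new trait method as described above (the exact signature is an assumption):
```rust
pub trait Map: Reflect {
    // ...existing items elided...

    /// Removes the entry with the given key,
    /// returning the boxed value if it was present.
    fn remove(&mut self, key: &dyn Reflect) -> Option<Box<dyn Reflect>>;
}
```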
Co-authored-by: radiish <thesethskigamer@gmail.com>
# Objective
The [documentation for `ButtonSettingsError`](https://docs.rs/bevy/0.9.0/bevy/input/gamepad/enum.ButtonSettingsError.html) incorrectly describes the valid range of values as `0.0..=2.0`, probably because it was copied from `AxisSettingsError`. The actual range, as seen in the functions that return it and in its own `thiserror` description, is `0.0..=1.0`.
## Solution
Update the doc comments to reflect the correct range.
Co-authored-by: Sol Toder <ajaxgb@gmail.com>
# Objective
Fix #6548. Most of these methods were already made `const` in #5688. `Entity::to_bits` is the only one that remained.
## Solution
Make it const.
# Objective
- Fix #3606
- Fix #4579
- Fix #3380
## Solution
When running on a Linux machine with some AMD or Intel device, when calling
`surface.get_current_texture()`, ignore `wgpu::SurfaceError::Timeout` errors.
## Alternative
An alternative solution found in the `wgpu` examples is:
```rust
let frame = surface
    .get_current_texture()
    .or_else(|_| {
        render_device.configure_surface(surface, &swap_chain_descriptor);
        surface.get_current_texture()
    })
    .expect("Error reconfiguring surface");
window.swap_chain_texture = Some(TextureView::from(frame));
```
See: <94ce76391b/wgpu/examples/framework.rs (L362-L370)>
Veloren [handles the Timeout error the way this PR proposes to handle it](https://github.com/gfx-rs/wgpu/issues/1218#issuecomment-1092056971).
The reason I went with this PR's solution is that `configure_surface` seems to be quite an expensive operation, and it would run every frame with the wgpu framework solution, despite the fact it works perfectly fine without `configure_surface`.
I know this looks super hacky with the linux-specific line and the AMD check, but my understanding is that the `Timeout` occurrence is specific to a quirk of some AMD drivers on linux, and if otherwise met should be considered a bug.
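A hedged sketch of the approach (the platform/vendor condition is abbreviated into a hypothetical `is_known_timeout_quirk()` helper):
```rust
let frame = match surface.get_current_texture() {
    Ok(frame) => frame,
    // On the affected Linux + AMD/Intel configurations, a Timeout is a known
    // driver quirk: skip this frame and try again on the next one.
    Err(wgpu::SurfaceError::Timeout) if is_known_timeout_quirk() => return,
    Err(err) => panic!("Failed to acquire next swap chain texture: {err:?}"),
};
```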
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Closes #5262
- Fix color banding caused by quantization.
## Solution
- Adds dithering to the tonemapping node from #3425.
- This is inspired by Godot's default "debanding" shader: https://gist.github.com/belzecue/
- Unlike Godot:
- debanding happens after tonemapping. My understanding is that this is preferred, because we are running the debanding at the last moment before quantization (`[f32, f32, f32, f32]` -> `f32`). This ensures we aren't biasing the dithering strength by applying it in a different (linear) color space.
- This code instead uses and references the original source, Valve at GDC 2015
![Screenshot from 2022-11-10 13-44-46](https://user-images.githubusercontent.com/2632925/201218880-70f4cdab-a1ed-44de-a88c-8759e77197f1.png)
![Screenshot from 2022-11-10 13-41-11](https://user-images.githubusercontent.com/2632925/201218883-72393352-b162-41da-88bb-6e54a1e26853.png)
## Additional Notes
Real time rendering to standard dynamic range outputs is limited to 8 bits of depth per color channel. Internally we keep everything in full 32-bit precision (`vec4<f32>`) inside passes and 16-bit between passes until the image is ready to be displayed, at which point the GPU implicitly converts our `vec4<f32>` into a single 32bit value per pixel, with each channel (rgba) getting 8 of those 32 bits.
### The Problem
8 bits of color depth is simply not enough precision to make each step invisible - we only have 256 values per channel! Human vision can perceive steps in luma to about 14 bits of precision. When drawing a very slight gradient, the transition between steps becomes visible because, with a gradient, neighboring pixels will all jump to the next "step" of precision at the same time.
### The Solution
One solution is to simply output in HDR - more bits of color data means the transition between bands will become smaller. However, not everyone has hardware that supports 10+ bit color depth. Additionally, 10-bit color doesn't even fully solve the issue: banding will still result in coherent bands on shallow gradients, but the steps will be harder to perceive.
The solution in this PR adds noise to the signal before it is "quantized" or resampled from 32 to 8 bits. Done naively, it's easy to add unneeded noise to the image. To ensure dithering is correct and absolutely minimal, noise is added *within* one step of the output color depth. When converting from the 32-bit to the 8-bit signal, the value is rounded to the nearest 8-bit value (0 - 255). Banding occurs around the transition from one value to the next, let's say from 50 to 51. Dithering will never add more than +/-0.5 bits of noise, so the pixels near this transition might round to 50 instead of 51 but will never round more than one step. This means that the output image won't have excess variance:
- in a gradient from 49 to 51, there will be a step between each band at 49, 50, and 51.
- Done correctly, the modified image of this gradient will never have adjacent pixels more than one step (0-255) from each other.
- I.e. when scanning across the gradient you should expect to see:
```
|-band-| |-band-| |-band-|
Baseline: 49 49 49 50 50 50 51 51 51
Dithered: 49 50 49 50 50 51 50 51 51
Dithered (wrong): 49 50 51 49 50 51 49 51 50
```
![Screenshot from 2022-11-10 14-12-36](https://user-images.githubusercontent.com/2632925/201219075-ab3f46be-d4e9-4869-b66b-a92e1706f49e.png)
![Screenshot from 2022-11-10 14-11-48](https://user-images.githubusercontent.com/2632925/201219079-ec5d2add-817d-487a-8fc1-84569c9cda73.png)
You can see from above how correct dithering "fuzzes" the transition between bands to reduce distinct steps in color, without adding excess noise.
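A small numeric sketch of the quantization step (plain Rust rather than the actual shader code): at most +/-0.5 of one 8-bit step of noise is added before rounding, so a pixel can shift by one step at most.
```rust
/// `value` is the tonemapped channel in 0.0..=1.0,
/// `noise` is screen-space noise in -0.5..=0.5.
fn quantize_with_dither(value: f32, noise: f32) -> u8 {
    (value * 255.0 + noise).round().clamp(0.0, 255.0) as u8
}
```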
### HDR
The previous section (and this PR) assumes the final output is to an 8-bit texture, however this is not always the case. When Bevy adds HDR support, the dithering code will need to take the per-channel depth into account instead of assuming it to be 0-255. Edit: I talked with Rob about this and it seems like the current solution is okay. We may need to revisit once we have actual HDR final image output.
---
## Changelog
### Added
- All pipelines now support deband dithering. This is enabled by default in 3D, and can be toggled in the `Tonemapping` component in camera bundles. Banding is a graphical artifact created when the rendered image is crunched from high precision (f32 per color channel) down to the final output (u8 per channel in SDR). This results in subtle gradients becoming blocky due to the reduced color precision. Deband dithering applies a small amount of noise to the signal before it is "crunched", which breaks up the hard edges of blocks (bands) of color. Note that this does not add excess noise to the image, as the amount of noise is less than a single step of a color channel - just enough to break up the transition between color blocks in a gradient.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Make the many foxes not unnecessarily bright. Broken since #5666.
- Fixes #6528
## Solution
- In #5666, normalisation of normals was moved from the fragment stage to the vertex stage. However, it was not added to the vertex stage for skinned normals. The many foxes are skinned, and their skinned normals were not unit normals, which made them brighter. Normalising the skinned normals fixes this.
---
## Changelog
- Fixed: Non-unit length skinned normals are now normalized.
This reverts commit 8429b6d6ca as discussed in #6522.
I tested that the game_menu example works as it should.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
# Objective
Attempting to build `bevy_tasks` produces the following error:
```
error[E0599]: no method named `is_finished` found for struct `async_executor::Task` in the current scope
--> /[...]]/bevy/crates/bevy_tasks/src/task.rs:51:16
|
51 | self.0.is_finished()
| ^^^^^^^^^^^ method not found in `async_executor::Task<T>`
```
It looks like this was introduced along with `Task::is_finished`, which delegates to `async_task::Task::is_finished`. However, the latter was only introduced in `async-task` 4.2.0; `bevy_tasks` does not explicitly depend on `async-task` but on `async-executor` ^1.3.0, which in turn depends on `async-task` ^4.0.0.
## Solution
Add an explicit dependency on `async-task` ^4.2.0.
# Objective
Copy `send_event` and friends from `World` to `WorldCell`.
Clean up `bevy_winit` using `WorldCell::send_event`.
## Changelog
Added `send_event`, `send_event_default`, and `send_event_batch` to `WorldCell`.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
# Objective
- it would be useful to inspect these structs using reflection
## Solution
- derive and register reflect
- Note that `#[reflect(Component)]` requires `Default` (or `FromWorld`) until #6060, so I implemented `Default` for `Tonemapping` with `is_enabled: false`
# Objective
The `load_internal_asset` macro is helpful when creating rendering plugins, but it doesn't support loading binary assets (like those compiled to SPIR-V).
## Solution
Add a `load_internal_binary_asset` macro that uses `include_bytes!`.
# Objective
Fixes #5393
## Solution
- Add padding to `GlobalsUniform` / `Globals` to make it 16-byte aligned.
Still not super clear on whether this is a `naga` thing or an `encase` thing or what. But now that we're offering `globals` up to users and #5393 is not just breaking an example, maybe we should do this sort of workaround?
* Move the despawn debug log from `World::despawn` to `EntityMut::despawn`.
* Move the despawn non-existent warning log from `Commands::despawn` to `World::despawn`.
This should make logging consistent regardless of which of the three `despawn` methods is used.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
# Objective
Using `Reflect` we can easily switch between a specific reflection trait object, such as a `dyn Struct`, to a `dyn Reflect` object via `Reflect::as_reflect` or `Reflect::as_reflect_mut`.
```rust
fn do_something(value: &dyn Reflect) {/* ... */}
let foo: Box<dyn Struct> = Box::new(Foo::default());
do_something(foo.as_reflect());
```
However, there is no way to convert a _boxed_ reflection trait object to a `Box<dyn Reflect>`.
## Solution
Add a `Reflect::into_reflect` method which allows converting a boxed reflection trait object back into a boxed `Reflect` trait object.
```rust
fn do_something(value: Box<dyn Reflect>) {/* ... */}
let foo: Box<dyn Struct> = Box::new(Foo::default());
do_something(foo.into_reflect());
```
---
## Changelog
- Added `Reflect::into_reflect`
# Objective
Some render plugins, like [bevy-hikari](https://github.com/cryscan/bevy-hikari), require setting `CameraRenderGraph`. In order to switch between render graphs, I need to insert a new `CameraRenderGraph` component. It's not very ergonomic.
## Solution
Add `CameraRenderGraph::set` like in [Name](https://docs.rs/bevy/latest/bevy/core/struct.Name.html).
---
## Changelog
### Added
- `CameraRenderGraph::set`.
# Objective
There is no way to get an owned value of `Reflect`.
## Solution
Add it! This was originally a part of #6421, but @MrGVSV asked me to create a separate PR for it to implement reflect diffing.
---
## Changelog
### Added
- `Reflect::reflect_owned` to get an owned version of `Reflect`.
# Objective
`NodeBundle` contains an `image` field, which can be misleading, because if you do supply an image there, nothing will be shown on screen. You need to use an `ImageBundle` instead.
## Solution
* `image` (`UiImage`) field is removed from `NodeBundle`,
* extraction stage queries now make an optional query for `UiImage`; if one is not found, the image handle that `UiImage` uses as a default is used: c019a60b39/crates/bevy_ui/src/ui_node.rs (L464)
* touching up docs for `NodeBundle` to help guide what `NodeBundle` should be used for
* renamed `entity.rs` to `node_bundle.rs` as that gives more of a hint regarding the module's purpose
* separating `camera_config` stuff from the pre-made UI node bundles so that `node_bundle.rs` makes more sense as a module name.
# Objective
- adding a new `.register` should not overwrite old type data
- separate crates should both be able to register the same type
I ran into this while debugging why `register::<Handle<T>>` removed the `ReflectHandle` type data from a prior `register_asset_reflect`.
## Solution
- make `register` do nothing if called again for the same type
- I also removed some unnecessary duplicate registrations
# Objective
When an error causes `debug_checked_unreachable` to be called, the panic message unhelpfully points to the function definition instead of the place that caused the error.
## Solution
Add the `#[track_caller]` attribute in debug mode.
# Objective
Fixes: https://github.com/bevyengine/bevy/issues/6466
Summary: The UI scaling example dynamically scales the UI, which dynamically allocates fonts to the font atlas, surpassing the protective limit and throwing a panic.
## Solution
- Set `TextSettings.allow_dynamic_font_size = true` for the UI scaling example. This is the ideal solution since the dynamic changes to the UI are not continuous but discrete.
- Update the panic text to reflect ui scaling as a potential cause
# Objective
Bevy UI (and third party plugins) currently have no good way to position themselves after all post processing effects. They currently use the tonemapping node, but this is not adequate if there is anything after tonemapping (such as FXAA).
## Solution
Add a logical `END_MAIN_PASS_POST_PROCESSING` RenderGraph node that main pass post processing effects position themselves before, and things like UIs can position themselves after.
# Objective
This PR fixes #5789 by enabling movable (and scalable) directional light shadow volumes.
## Solution
This PR changes `ExtractedDirectionalLight` to hold a copy of the `DirectionalLight` entity's `GlobalTransform`, instead of just a `direction` vector. This allows the shadow map volume (as defined by the light's `shadow_projection` field) to be transformed honoring translation _and_ scale transforms, and not just rotation.
It also augments the texel size calculation (used to determine the `shadow_normal_bias`) so that it now takes into account the upper bound of the x/y/z scale of the `GlobalTransform`.
This change makes the directional light extraction code more consistent with point and spot lights (that already use `transform`), and allows easily moving and scaling the shadow volume along with a player entity based on camera distance/angle, immediately enabling more real world use cases until we have a more sophisticated adaptive implementation, such as the one described in #3629.
**Note:** While it was previously possible to update the projection achieving a similar effect, depending on the light direction and distance to the origin, the fact that the shadow map camera was always positioned at the origin with a hardcoded `Vec3::Y` up value meant you would get sub-optimal or inconsistent/incorrect results.
---
## Changelog
### Changed
- `DirectionalLight` shadow volumes now honor translation and scale transforms
## Migration Guide
- If your directional lights were positioned at the origin and not scaled (the default, most common scenario) no changes are needed on your part; it just works as before;
- If you previously had a system for dynamically updating directional light shadow projections, you might now be able to simplify your code by updating the directional light entity's transform instead;
- In the unlikely scenario that a scene with directional lights that previously rendered shadows correctly has missing shadows, make sure your directional lights are positioned at (0, 0, 0) and are not scaled to a size that's too large or too small.
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: DGriffin91 <github@dgdigital.net>
`EntityMut::remove_children` does not call `self.update_location()` which is unsound.
Verified by adding the following assertion, which fails when running the tests.
```rust
let before = self.location();
self.update_location();
assert_eq!(before, self.location());
```
I also removed incorrect messages like "parent entity is not modified" and the unhelpful "Inserting a bundle in the children entities may change the parent entity's location if they were of the same archetype" which might lead people to think that's the *only* thing that can change the entity's location.
# Changelog
Added `EntityMut::world_scope`.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
Allow passing `Vec`s of glam vector types as vertex attributes.
Alternative to #4548 and #2719
Also used some macros to cut down on all the repetition.
# Migration Guide
Implementations of `From<Vec<[u16; 4]>>` and `From<Vec<[u8; 4]>>` for `VertexAttributeValues` have been removed.
If you're passing either `Vec<[u16; 4]>` or `Vec<[u8; 4]>` into `Mesh::insert_attribute`, it will now require wrapping it in the right `VertexAttributeValues` enum variant.
Co-authored-by: devil-ira <justthecooldude@gmail.com>
# Objective
Closes #5934
Currently it is not possible to de/serialize data to non-self-describing formats using reflection.
## Solution
Add support for non-self-describing de/serialization using reflection.
This allows us to use binary formatters, like [`postcard`](https://crates.io/crates/postcard):
```rust
#[derive(Reflect, FromReflect, Debug, PartialEq)]
struct Foo {
    data: String,
}

let mut registry = TypeRegistry::new();
registry.register::<Foo>();

let input = Foo {
    data: "Hello world!".to_string(),
};

// === Serialize! === //
let serializer = ReflectSerializer::new(&input, &registry);
let bytes: Vec<u8> = postcard::to_allocvec(&serializer).unwrap();
println!("{:?}", bytes); // Output: [129, 217, 61, 98, ...]

// === Deserialize! === //
let deserializer = UntypedReflectDeserializer::new(&registry);
let dynamic_output = deserializer
    .deserialize(&mut postcard::Deserializer::from_bytes(&bytes))
    .unwrap();
let output = <Foo as FromReflect>::from_reflect(dynamic_output.as_ref()).unwrap();
assert_eq!(input, output); // OK!
```
#### Crates Tested
- ~~[`rmp-serde`](https://crates.io/crates/rmp-serde)~~ Apparently, this _is_ self-describing
- ~~[`bincode` v2.0.0-rc.1](https://crates.io/crates/bincode/2.0.0-rc.1) (using [this PR](https://github.com/bincode-org/bincode/pull/586))~~ This actually works for the latest release (v1.3.3) of [`bincode`](https://crates.io/crates/bincode) as well. You just need to be sure to use fixed-int encoding.
- [`postcard`](https://crates.io/crates/postcard)
## Future Work
Ideally, we would refactor the `serde` module, but I don't think I'll do that in this PR so as to keep the diff relatively small (and to avoid any painful rebases). This should probably be done once this is merged, though.
Some areas we could improve with a refactor:
* Split deserialization logic across multiple files
* Consolidate helper functions/structs
* Make the logic more DRY
---
## Changelog
- Add support for non-self-describing de/serialization using reflection.
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
- Adds a bloom pass for HDR-enabled Camera3ds.
- Supersedes (and all credit due to!) https://github.com/bevyengine/bevy/pull/3430 and https://github.com/bevyengine/bevy/pull/2876
![image](https://user-images.githubusercontent.com/47158642/198698783-228edc00-20b5-4218-a613-331ccd474f38.png)
## Solution
- A threshold is applied to isolate emissive samples, and then a series of downscale and upscaling passes are applied and composited together.
- Bloom is applied to 2d or 3d Cameras with hdr: true and a BloomSettings component.
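A usage sketch (component/bundle wiring is illustrative):
```rust
commands.spawn((
    Camera3dBundle {
        camera: Camera {
            hdr: true, // bloom requires an HDR render target
            ..default()
        },
        ..default()
    },
    BloomSettings::default(),
));
```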
---
## Changelog
- Added a `core_pipeline::bloom::BloomSettings` component.
- Added `BloomNode` that runs between the main pass and tonemapping.
- Added a `BloomPlugin` that is loaded as part of CorePipelinePlugin.
- Added a bloom example project.
Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Co-authored-by: DGriffin91 <github@dgdigital.net>
# Objective
Alternative to #6424. Fixes #6226.
Fixes spawning empty bundles.
## Solution
Add `BundleComponentStatus` trait and implement it for `AddBundle` and a new `SpawnBundleStatus` type (which always returns an Added status). `write_components` is now generic on `BundleComponentStatus` instead of taking `AddBundle` directly. This means BundleSpawner can now avoid needing AddBundle from the Empty archetype, which means BundleSpawner no longer needs a reference to the original archetype.
In theory this cuts down on the work done in `write_components` when spawning, but I'm seeing no change in the spawn benchmarks.
# Objective
The UI pass in HDR breaks currently because the color attachment format does not match the HDR ViewTarget.
## Solution
Specialize the UI pipeline on "hdr-ness" and select the appropriate format (like we do in the other built in pipelines).
# Objective
- Fixes #4019
- Fix lighting of double-sided materials when using a negative scale
- The FlightHelmet.gltf model's hose uses a double-sided material. Loading the model with a uniform scale of -1.0, and comparing against Blender, it was identified that negating the world-space tangent, bitangent, and interpolated normal produces incorrect lighting. Discussion with Morten Mikkelsen clarified that this is both incorrect and unnecessary.
## Solution
- Remove the code that negates the T, B, and N vectors (the interpolated world-space tangent, calculated world-space bitangent, and interpolated world-space normal) when seeing the back face of a double-sided material with negative scale.
- Negate the world normal for a double-sided back face only when not using normal mapping
### Before, on `main`, flipping T, B, and N
<img width="932" alt="Screenshot 2022-08-22 at 15 11 53" src="https://user-images.githubusercontent.com/302146/185965366-f776ff2c-cfa1-46d1-9c84-fdcb399c273c.png">
### After, on this PR
<img width="932" alt="Screenshot 2022-08-22 at 15 12 11" src="https://user-images.githubusercontent.com/302146/185965420-8be493e2-3b1a-4188-bd13-fd6b17a76fe7.png">
### Double-sided material without normal maps
https://user-images.githubusercontent.com/302146/185988113-44a384e7-0b55-4946-9b99-20f8c803ab7e.mp4
---
## Changelog
- Fixed: Lighting of normal-mapped, double-sided materials applied to models with negative scale
- Fixed: Lighting and shadowing of back faces with no normal-mapping and a double-sided material
## Migration Guide
`prepare_normal` from the `bevy_pbr::pbr_functions` shader import has been reworked.
Before:
```rust
pbr_input.world_normal = in.world_normal;

pbr_input.N = prepare_normal(
    pbr_input.material.flags,
    in.world_normal,
#ifdef VERTEX_TANGENTS
#ifdef STANDARDMATERIAL_NORMAL_MAP
    in.world_tangent,
#endif
#endif
    in.uv,
    in.is_front,
);
```
After:
```rust
pbr_input.world_normal = prepare_world_normal(
    in.world_normal,
    (material.flags & STANDARD_MATERIAL_FLAGS_DOUBLE_SIDED_BIT) != 0u,
    in.is_front,
);

pbr_input.N = apply_normal_mapping(
    pbr_input.material.flags,
    pbr_input.world_normal,
#ifdef VERTEX_TANGENTS
#ifdef STANDARDMATERIAL_NORMAL_MAP
    in.world_tangent,
#endif
#endif
    in.uv,
);
```
# Objective
Replace `WorldQueryGats` trait with actual gats
## Solution
Replace `WorldQueryGats` trait with actual gats
---
## Changelog
- Replaced `WorldQueryGats` trait with actual gats
## Migration Guide
- Replace usage of `WorldQueryGats` assoc types with the actual gats on `WorldQuery` trait
Respect `mipmap_filter` when creating an `ImageDescriptor` with `linear()`/`nearest()`.
# Objective
Fixes #6348
## Migration Guide
This PR changes the default `ImageSettings` and may lead to unexpected behaviour for existing projects with mipmapped textures. Users should provide a custom `ImageSettings` resource with `mipmap_filter = FilterMode::Nearest` if they want to keep the old behaviour.
Co-authored-by: Yakov Borevich <j.borevich@gmail.com>
# Objective
Currently we are limiting the number of directional lights in a scene to one.
## Solution
Increase the number of directional lights from 1 to 10.
This still is not a perfect solution, but should unblock many use cases.
We could probably just store the directional lights, similar to the point lights, in a storage buffer, allowing for a variable number of directional lights.
Co-authored-by: Kurt Kühnert <51823519+Ku95@users.noreply.github.com>
# Objective
Right now, the `TaskPool` implementation allows panics to permanently kill worker threads. This is currently non-recoverable without using a `std::panic::catch_unwind` in every scheduled task. This is poor ergonomics and even poorer developer experience. This is exacerbated by #2250 as these threads are global and cannot be replaced after initialization.
Removes the need for temporary fixes like #4998. Fixes #4996. Fixes #6081. Fixes #5285. Fixes #5054. Supersedes #2307.
## Solution
The current solution is to wrap `Executor::run` in `TaskPool` with a `catch_unwind`, discarding the potential panic. This was taken straight from [smol](404c7bcc0a/src/spawn.rs (L44))'s current implementation. ~~However, this is not entirely ideal as:~~
- ~~the panic is not signaled to the awaiting task. We would need to change `Task<T>` to use `async_task::FallibleTask` internally, and even then it doesn't signal *why* it panicked, just that it did.~~ (See below).
- ~~no error is logged of any kind~~ (See below)
- ~~it's unclear if it drops other tasks in the executor~~ (it does not)
- ~~This allows the ECS parallel executor to keep chugging even though a system's task has been dropped. This inevitably leads to deadlock in the executor.~~ Assuming we don't catch the unwind in ParallelExecutor, this will naturally kill the main thread.
### Alternatives
A final solution likely will incorporate elements of any or all of the following.
#### ~~Log and Ignore~~
~~Log the panic, drop the task, keep chugging. This only addresses the discoverability of the panic. The process will continue to run, probably deadlocking the executor. tokio's detached tasks operate in this fashion.~~
Panics already do this by default, even when caught by `catch_unwind`.
#### ~~`catch_unwind` in `ParallelExecutor`~~
~~Add another layer catching system-level panics into the `ParallelExecutor`. How the executor continues when a core dependency of many systems fails to run is up for debate.~~
`async_task::Task` bubbles up panics already, this will transitively push panics all the way to the main thread.
#### ~~Emulate/Copy `tokio::JoinHandle` with `Task<T>`~~
~~`tokio::JoinHandle<T>` bubbles up the panic from the underlying task when awaited. This can be transitively applied across other APIs that also use `Task<T>` like `Query::par_for_each` and `TaskPool::scope`, bubbling up the panic until it's either caught or it reaches the main thread.~~
`async_task::Task` bubbles up panics already, this will transitively push panics all the way to the main thread.
#### Abort on Panic
The nuclear option. Log the error, abort the entire process on any thread in the task pool panicking. Definitely avoids any additional infrastructure for passing the panic around, and might actually lead to more efficient code as any unwinding is optimized out. However gives the developer zero options for dealing with the issue, a seemingly poor choice for debuggability, and prevents graceful shutdown of the process. Potentially an option for handling very low-level task management (a la #4740). Roughly takes the shape of:
```rust
struct AbortOnPanic;

impl Drop for AbortOnPanic {
    fn drop(&mut self) {
        std::process::abort();
    }
}

let guard = AbortOnPanic;
// Run task
std::mem::forget(guard);
```
---
## Changelog
Changed: `bevy_tasks::TaskPool`'s threads will no longer terminate permanently when a task scheduled onto them panics.
Changed: `bevy_tasks::Task` and`bevy_tasks::Scope` will propagate panics in the spawned tasks/scopes to the parent thread.
# Objective
Add consistent UI rendering and interaction where deep nodes inside two different hierarchies will never render on top of one-another by default and offer an escape hatch (z-index) for nodes to change their depth.
## The problem with current implementation
The current implementation of UI rendering is broken in that regard, mainly because [it sets the Z value of the `Transform` component based on a "global Z" space](https://github.com/bevyengine/bevy/blob/main/crates/bevy_ui/src/update.rs#L43) shared by all nodes in the UI. This doesn't account for the fact that each node's final `GlobalTransform` value will be relative to its parent. This effectively makes the depth unpredictable when two deep trees are rendered on top of one-another.
At the moment, it's also up to each part of the UI code to sort all of the UI nodes. The solution that's offered here does the full sorting of UI node entities once and offers the result through a resource so that all systems can use it.
## Solution
### New ZIndex component
This adds a new optional `ZIndex` enum component for nodes which offers two mechanism:
- `ZIndex::Local(i32)`: Overrides the depth of the node relative to its siblings.
- `ZIndex::Global(i32)`: Overrides the depth of the node relative to the UI root. This basically allows any node in the tree to "escape" the parent and be ordered relative to the entire UI.
Note that in the current implementation, omitting `ZIndex` on a node has the same result as adding `ZIndex::Local(0)`. Additionally, the "global" stacking context is essentially a way to add your node to the root stacking context, so using `ZIndex::Local(n)` on a root node (one without a parent) will share that space with all nodes using `ZIndex::Global(n)`.
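A usage sketch of the two variants:
```rust
// Ordered two steps above its siblings within the same parent.
commands.spawn((NodeBundle::default(), ZIndex::Local(2)));

// Escapes its parent and is ordered against the entire UI root.
commands.spawn((NodeBundle::default(), ZIndex::Global(-1)));
```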
### New UiStack resource
This adds a new `UiStack` resource which is calculated from both hierarchy and `ZIndex` during UI update and contains a vector of all node entities in the UI, ordered by depth (from farthest from camera to closest). This is exposed publicly by the bevy_ui crate with the hope that it can be used for consistent ordering and to reduce the amount of sorting that needs to be done by UI systems (i.e. instead of sorting everything by `global_transform.z` in every system, this array can be iterated over).
### New z_index example
This also adds a new z_index example that showcases the new `ZIndex` component. It's also a good general demo of the new UI stack system, because making this kind of UI was very broken with the old system (e.g. nodes would render on top of each other, not respecting hierarchy or insert order at all).
![image](https://user-images.githubusercontent.com/1060971/189015985-8ea8f989-0e9d-4601-a7e0-4a27a43a53f9.png)
---
## Changelog
- Added the `ZIndex` component to bevy_ui.
- Added the `UiStack` resource to bevy_ui, and added implementation in a new `stack.rs` module.
- Removed the previous Z updating system from bevy_ui, because it was replaced with the above.
- Changed bevy_ui rendering to use UiStack instead of z ordering.
- Changed bevy_ui focus/interaction system to use UiStack instead of z ordering.
- Added a new z_index example.
## ZIndex demo
Here's a demo I wrote to test these features
https://user-images.githubusercontent.com/1060971/188329295-d7beebd6-9aee-43ab-821e-d437df5dbe8a.mp4
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This reverts commit 53d387f340.
# Objective
Reverts #6448. This didn't have the intended effect: we're now getting bevy::prelude shown in the docs again.
Co-authored-by: Alejandro Pascual <alejandro.pascual.pozo@gmail.com>
# Objective
- Right now re-exports are completely hidden in prelude docs.
- Fixes #6433
## Solution
- We could show the re-exports without inlining their documentation.
# Objective
Fixes #6059, changing all incorrect occurrences of ``id`` in the ``entity`` module to ``index``:
* struct level documentation,
* ``id`` struct field,
* ``id`` method and its documentation.
## Solution
Renaming and verifying using CI.
Co-authored-by: Edvin Kjell <43633999+Edwox@users.noreply.github.com>
# Objective
In some scenarios it can be useful to check if a task has finished without polling it. I added a function called `is_finished` to check whether a task has finished.
## Solution
Since `async_task` supports it out of the box, it is just a simple wrapper function.
---
# Objective
- Add post processing passes for FXAA (Fast Approximate Anti-Aliasing)
- Add example comparing MSAA and FXAA
## Solution
When the FXAA plugin is added, passes for FXAA are inserted between the main pass and the tonemapping pass. Supports using either HDR or LDR output from the main pass.
---
## Changelog
- Add a new FXAANode that runs after the main pass when the FXAA plugin is added.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
For `derive(WorldQuery)`, there are three structs generated, `Item`, `Fetch` and `State`.
These inherit the visibility of the derived structure, thus `#![warn(missing_docs)]` would
warn about missing documentation for these structures.
- [ ] I'd like some advice on what to write here, as I personally don't really understand `Fetch` nor `State`.
# Objective
Currently, `bevy_dynamic_plugin` simply panics on error. This makes it impossible to handle failures in applications that use this feature.
For example, I'd like to build an optional expansion for my game, that may not be distributed to all users. I want to use `bevy_dynamic_plugin` for loading it. I want my game to try to load it on startup, but continue without it if it cannot be loaded.
## Solution
- Make the `dynamically_load_plugin` function return a `Result`, so it can gracefully return loading errors.
- Create an error enum type, to provide useful information about the kind of error. This adds `thiserror` to the dependencies of `bevy_dynamic_plugin`, but that dependency is already used in other parts of bevy (such as `bevy_asset`), so not a big deal.
I chose not to change the behavior of the builder method in the App extension trait. I kept it as panicking. There is no clean way (that I'm aware of) to make a builder-style API that has fallible methods. So it is either a panic or a warning. I feel the panic is more appropriate.
---
## Changelog
### Changed
- `bevy_dynamic_plugin::dynamically_load_plugin` now returns `Result` instead of panicking, to allow for error handling