mirror of
https://github.com/bevyengine/bevy
synced 2024-11-10 07:04:33 +00:00
315 commits
Author | SHA1 | Message | Date
---|---|---|---
Rich Churcher
|
cdc18ee886
|
Move UI example to testbed (#16241)
# Objective The UI example is quite extensive and probably no longer the best teaching example. Closes #16230. |
||
ickshonpe
|
6d3965f520
|
Overflow clip margin (#15561)
# Objective Limited implementation of the CSS property `overflow-clip-margin` https://developer.mozilla.org/en-US/docs/Web/CSS/overflow-clip-margin Allows you to control the visible area for clipped content when using overflow-clip, -hidden, or -scroll and expand it with a margin. Based on #15442 Fixes #15468 ## Solution Adds a new field to Style: `overflow_clip_margin: OverflowClipMargin`. The field is ignored unless overflow-clip, -hidden or -scroll is set on at least one axis. `OverflowClipMargin` has these associated constructor functions: ``` pub const fn content_box() -> Self; pub const fn padding_box() -> Self; pub const fn border_box() -> Self; ``` You can also use the method `with_margin` to increase the size of the visible area: ``` commands .spawn(NodeBundle { style: Style { width: Val::Px(100.), height: Val::Px(100.), padding: UiRect::all(Val::Px(20.)), border: UiRect::all(Val::Px(5.)), overflow: Overflow::clip(), overflow_clip_margin: OverflowClipMargin::border_box().with_margin(25.), ..Default::default() }, border_color: Color::BLACK.into(), background_color: GRAY.into(), ..Default::default() }) ``` `with_margin` expects a length in logical pixels; negative values are clamped to zero. ## Notes * To keep this PR as simple as possible I omitted responsive margin values support. This could be added in a follow up if we want it. * CSS also supports a `margin-box` option but we don't have access to the margin values in `Node` so it's probably not feasible to implement atm. ## Testing ```cargo run --example overflow_clip_margin``` <img width="396" alt="overflow-clip-margin" src="https://github.com/user-attachments/assets/07b51cd6-a565-4451-87a0-fa079429b04b"> ## Migration Guide `Style` has a new field `overflow_clip_margin` of type `OverflowClipMargin`. It allows users to set the visible area for clipped content when using overflow-clip, -hidden, or -scroll and expand it with a margin. There are three associated constructor functions `content_box`, `padding_box` and `border_box`: * `content_box`: elements painted outside of the content box area (the innermost part of the node excluding the padding and border) of the node are clipped. This is the new default behaviour. * `padding_box`: elements painted outside of the padding area of the node are clipped. * `border_box`: elements painted outside of the bounds of the node are clipped. This matches the behaviour from Bevy 0.14. There is also a `with_margin` method that increases the size of the visible area by the given number in logical pixels; negative margin values are clamped to zero. `OverflowClipMargin` is ignored unless overflow-clip, -hidden or -scroll is also set on at least one axis of the UI node. --------- Co-authored-by: UkoeHB <37489173+UkoeHB@users.noreply.github.com> |
||
Shane Celis
|
5157fef84b
|
Add window drag move and drag resize without decoration example. (#15814)
# Objective Add an example for the new drag move and drag resize introduced by PR #15674 and fix #15734. ## Solution I created an example that allows the user to exercise drag move and drag resize separately. The user can also choose what direction the resize works in. ![Screenshot 2024-10-10 at 4 06 43 AM](https://github.com/user-attachments/assets/1da558ab-a80f-49af-8b7d-bb635b0f038f) ### Name The example is called `window_drag_move`. Happy to have that bikeshedded. ### Contentious Refactor? This PR removed the `ResizeDirection` enumeration in favor of using `CompassOctant` which had the same variants. Perhaps this is contentious. ### Unsafe? In PR #15674 I mentioned that `start_drag_move()` and `start_drag_resize()`'s requirement to only be called in the presence of a left-click looks like a compiler-unenforceable contract that can cause intermittent panics when not observed, so perhaps the functions should be marked unsafe. **I have not made that change** here since I didn't see a clear consensus on that. ## Testing I exercised this on x86 macOS. However, winit for macOS does not support drag resize. It reports a good error when `start_drag_resize()` is called. I'd like to see it tested on Windows and Linux. --- ## Showcase Example window_drag_move shows how to drag or resize a window without decoration. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
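A rough sketch of the API in use, assuming `start_drag_move` is exposed as a method on the `Window` component per PR #15674 (app setup elided):

```rust
use bevy::prelude::*;

// Sketch: begin an OS-level window drag while the left mouse button is
// pressed. Per the contract discussed above, `start_drag_move` should only
// be called in response to a left-click.
fn drag_window(buttons: Res<ButtonInput<MouseButton>>, mut windows: Query<&mut Window>) {
    if buttons.just_pressed(MouseButton::Left) {
        for mut window in &mut windows {
            window.start_drag_move();
        }
    }
}
```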
||
Joona Aalto
|
0e30b68b20
|
Add mesh picking backend and MeshRayCast system parameter (#15800)
# Objective Closes #15545. `bevy_picking` supports UI and sprite picking, but not mesh picking. Being able to pick meshes would be extremely useful for various games, tools, and our own examples, as well as scene editors and inspectors. So, we need a mesh picking backend! Luckily, [`bevy_mod_picking`](https://github.com/aevyrie/bevy_mod_picking) (which `bevy_picking` is based on) by @aevyrie already has a [backend for it]( |
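A hypothetical sketch of the new `MeshRayCast` system parameter in use; the import paths, `RayCastSettings`, and hit fields here are assumptions, not confirmed by this PR text:

```rust
use bevy::prelude::*;

// Sketch: cast a ray from the camera and log the nearest mesh hit.
fn raycast_from_camera(
    mut ray_cast: MeshRayCast,
    camera: Query<&GlobalTransform, With<Camera3d>>,
) {
    let Ok(transform) = camera.get_single() else { return };
    let ray = Ray3d {
        origin: transform.translation(),
        direction: transform.forward(),
    };
    // Hits are assumed to come back sorted nearest-first.
    if let Some((entity, hit)) = ray_cast.cast_ray(ray, &RayCastSettings::default()).first() {
        info!("hit {entity} at {:?}", hit.point);
    }
}
```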
||
Matty
|
e563f86a1d
|
Simplified easing curves (#15711)
# Objective Simplify the API surrounding easing curves. Broaden the base of types that support easing. ## Solution There is now a single library function, `easing_curve`, which constructs a unit-parametrized easing curve between two values based on an `EaseFunction`: ```rust /// Given a `start` and `end` value, create a curve parametrized over [the unit interval] /// that connects them, using the given [ease function] to determine the form of the /// curve in between. /// /// [the unit interval]: Interval::UNIT /// [ease function]: EaseFunction pub fn easing_curve<T: Ease>(start: T, end: T, ease_fn: EaseFunction) -> EasingCurve<T> { //... } ``` As this shows, the type of the output curve is generic only in `T`. In particular, as long as `T` is `Reflect` (and `FromReflect` etc. — i.e., a standard "well-behaved" reflectable type), `EasingCurve<T>` is also `Reflect`, and there is no special field handling nonsense. Therefore, `EasingCurve` is the kind of thing that would be able to be easily changed in an editor. This is made possible by storing the actual `EaseFunction` on `EasingCurve<T>` instead of indirecting through some kind of function type (which generally leads to issues with reflection). The types that can be eased are those that implement a trait `Ease`: ```rust /// A type whose values can be eased between. /// /// This requires the construction of an interpolation curve that actually extends /// beyond the curve segment that connects two values, because an easing curve may /// extrapolate before the starting value and after the ending value. This is /// especially common in easing functions that mimic elastic or springlike behavior. pub trait Ease: Sized { /// Given `start` and `end` values, produce a curve with [unlimited domain] /// that: /// - takes a value equivalent to `start` at `t = 0` /// - takes a value equivalent to `end` at `t = 1` /// - has constant speed everywhere, including outside of `[0, 1]` /// /// [unlimited domain]: Interval::EVERYWHERE fn interpolating_curve_unbounded(start: &Self, end: &Self) -> impl Curve<Self>; } ``` (I know, I know, yet *another* interpolation trait. See 'Future direction'.) The other existing easing functions from the previous version of this module have also become new members of `EaseFunction`: `Linear`, `Steps`, and `Elastic` (which maybe needs a different name). The latter two are parametrized. ## Testing Tested using the `easing_functions` example. I also axed the `cubic_curve` example which was of questionable value and replaced it with `eased_motion`, which uses this API in the context of animation: https://github.com/user-attachments/assets/3c802992-6b9b-4b56-aeb1-a47501c29ce2 --- ## Future direction Morally speaking, `Ease` is incredibly similar to `StableInterpolate`. Probably, we should just merge `StableInterpolate` into `Ease`, and then make `SmoothNudge` an automatic extension trait of `Ease`. The reason I didn't do that is that `StableInterpolate` is not implemented for `VectorSpace` because of concerns about the `Color` types, and I wanted to avoid controversy. I think that may be a good idea though. As Alice mentioned before, we should also probably get rid of the `interpolation` dependency. The parametrized `Elastic` variant probably also needs some additional work (e.g. renaming, in/out/in-out variants, etc.) if we want to keep it. |
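For illustration, a short usage sketch of the new API (imports elided since the items are new in this PR; exact paths may differ):

```rust
// Sketch: ease between two points; `easing_curve` and `EaseFunction`
// are the items introduced above.
let curve = easing_curve(Vec2::ZERO, Vec2::ONE, EaseFunction::CubicInOut);
// The curve is parametrized over the unit interval, so sampling inside
// [0.0, 1.0] yields Some(...) and sampling outside yields None.
let mid = curve.sample(0.5); // Some(Vec2) partway along the ease
let out_of_range = curve.sample(2.0); // None
```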
||
ickshonpe
|
99b9a2fcd7
|
box shadow (#15204)
# Objective UI box shadow support Adds a new component `BoxShadow`: ```rust pub struct BoxShadow { /// The shadow's color pub color: Color, /// Horizontal offset pub x_offset: Val, /// Vertical offset pub y_offset: Val, /// Horizontal difference in size from the occluding uinode pub spread_radius: Val, /// Blurriness of the shadow pub blur_radius: Val, } ``` To use `BoxShadow`, add the component to any Bevy UI node and a shadow will be drawn beneath that node. Also adds a resource `BoxShadowSamples` that can be used to adjust the shadow quality. #### Notes * I'm not super happy with the field names. Maybe we need a `struct Size { width: Val, height: Val }` type or something. * The shader isn't very optimised but I don't see that it's too important for now as the number of shadows being rendered is not going to be massive most of the time. I think it's more important to get the API and geometry correct with this PR. * I didn't implement an inset property, it's not essential and can easily be added in a follow up. * Shadows are only rendered for uinodes, not for images or text. * Batching isn't supported, it would need out-of-the-scope-of-this-pr changes to the way the UI handles z-ordering for it to be effective. # Showcase ```cargo run --example box_shadow -- --samples 4``` <img width="391" alt="br" src="https://github.com/user-attachments/assets/4e8add96-dc93-46e0-9e35-d995eb0943ad"> ```cargo run --example box_shadow -- --samples 10``` <img width="391" alt="s10" src="https://github.com/user-attachments/assets/ecb384c9-4012-4cd6-9dea-5180904bf28e"> |
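A minimal sketch of attaching the new component to a node (field names follow the struct above; the values and layout details are arbitrary):

```rust
use bevy::prelude::*;

// Sketch: a white box with a soft black shadow drawn beneath it.
fn setup(mut commands: Commands) {
    commands.spawn((
        NodeBundle {
            style: Style {
                width: Val::Px(200.),
                height: Val::Px(100.),
                ..default()
            },
            background_color: Color::WHITE.into(),
            ..default()
        },
        BoxShadow {
            color: Color::BLACK.with_alpha(0.8),
            x_offset: Val::Px(10.),
            y_offset: Val::Px(10.),
            spread_radius: Val::Px(0.),
            blur_radius: Val::Px(6.),
        },
    ));
}
```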
||
IceSentry
|
4bf647ff3b
|
Add Order Independent Transparency (#14876)
# Objective - Alpha blending can easily fail in many situations and requires sorting on the CPU ## Solution - Implement order independent transparency (OIT) as an alternative to alpha blending - The implementation uses 2 passes - The first pass records all the fragments' colors and positions to a buffer that is the size of N layers * the render target resolution. - The second pass sorts the fragments, blends them and draws them to the screen. It also currently does manual depth testing because early-z fails in too many cases in the first pass. ## Testing - We've been using this implementation at foresight in production for many months now and we haven't had any issues related to OIT. --- ## Showcase ![image](https://github.com/user-attachments/assets/157f3e32-adaf-4782-b25b-c10313b9bc43) ![image](https://github.com/user-attachments/assets/bef23258-0c22-4b67-a0b8-48a9f571c44f) ## Future work - Add an example showing how to use OIT for a custom material - Next step would be to implement a per-pixel linked list to reduce memory use - I'd also like to investigate using a BinnedRenderPhase instead of a SortedRenderPhase. If it works, it would make the transparent pass significantly faster. --------- Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com> Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com> Co-authored-by: Charlotte McElwain <charlotte.c.mcelwain@gmail.com> |
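A hypothetical sketch of opting a camera into OIT; the component name, module path, and plugin registration here are assumptions based on this PR's description:

```rust
use bevy::core_pipeline::oit::OrderIndependentTransparencySettings;
use bevy::prelude::*;

// Sketch: cameras opt in per-view; the settings bound how many transparent
// fragment layers per pixel the first pass records. Registering the OIT
// plugin (if required) is elided here.
fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        OrderIndependentTransparencySettings::default(),
    ));
}
```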
||
Mohamed Osama
|
91bed8ce51
|
Screen shake example (#15567)
# Objective Closes https://github.com/bevyengine/bevy/issues/15564 ## Solution Adds a screen shake example. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
||
François Mockers
|
01387101df
|
add example for ease functions (#15703)
# Objective - Followup to #15675 - Add an example showcasing the functions ## Solution - Add an example showcasing the functions - Some of the functions from the interpolation crate are messed up, fixed in #15706 ![ease](https://github.com/user-attachments/assets/1f3b2b80-23d2-45c7-8b08-95b2e870aa02) --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Joona Aalto <jondolf.dev@gmail.com> |
||
Ida "Iyes"
|
31409ebc61
|
Add Image methods for easy access to a pixel's color (#10392)
# Objective If you want to draw / generate images from the CPU, such as: - to create procedurally-generated assets - for games whose art style is best implemented by poking pixels directly from the CPU, instead of using shaders It is currently very unergonomic to do in Bevy, because you have to deal with the raw bytes inside `image.data`, take care of the pixel format, etc. ## Solution This PR adds some helper methods to `Image` for pixel manipulation. These methods allow you to use Bevy's user-friendly `Color` struct to read and write the colors of pixels, at arbitrary coordinates (specified as `UVec3` to support any texture dimension). They handle encoding/decoding to the `Image`'s `TextureFormat`, incl. any sRGB conversion. While we are at it, also add methods to help with direct access to the raw bytes. It is now easy to compute the offset where the bytes of a specific pixel coordinate are found, or to just get a Rust slice to access them. Caveat: `Color` roundtrips are obviously going to be lossy for non-float `TextureFormat`s. Using `set_color_at` followed by `get_color_at` will return a different value, due to the data conversions involved (such as `f32` -> `u8` -> `f32` for the common `Rgba8UnormSrgb` texture format). Be careful when comparing colors (such as checking for a color you wrote before)! Also adding a new example: `cpu_draw` (under `2d`), to showcase these new APIs. --- ## Changelog ### Added - `Image` APIs for easy access to the colors of specific pixels. --------- Co-authored-by: Pascal Hertleif <killercup@gmail.com> Co-authored-by: François <mockersf@gmail.com> Co-authored-by: ltdk <usr@ltdk.xyz> |
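A sketch of the new helpers in use (method names per this PR; the 2d x/y coordinate signature is an assumption):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat};

// Sketch: create a small image and poke a pixel from the CPU.
fn make_image() -> Image {
    let mut image = Image::new_fill(
        Extent3d { width: 64, height: 64, depth_or_array_layers: 1 },
        TextureDimension::D2,
        &[255, 255, 255, 255], // opaque white RGBA fill
        TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::MAIN_WORLD | RenderAssetUsages::RENDER_WORLD,
    );
    // Write then read back; expect a small round-trip error for u8 formats,
    // as the caveat above explains.
    image.set_color_at(32, 32, Color::srgb(1.0, 0.0, 0.0)).unwrap();
    let _center = image.get_color_at(32, 32).unwrap();
    image
}
```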
||
poopy
|
d9190e4ff6
|
Add Support for Triggering Events via `AnimationEvent`s (#15538)
# Objective Add support for events that can be triggered from animation clips. This is useful when you need something to happen at a specific time in an animation. For example, playing a sound every time a character's feet hit the ground when walking. Closes #15494 ## Solution Added a new field to `AnimationClip`: `events`, which contains a list of `AnimationEvent`s. These are automatically triggered in `animate_targets` and `trigger_untargeted_animation_events`. ## Testing Added a couple of tests and an example (`animation_events.rs`) to make sure events are triggered when expected. --- ## Showcase `Events` need to also implement `AnimationEvent` and `Reflect` to be used with animations. ```rust #[derive(Event, AnimationEvent, Reflect)] struct SomeEvent; ``` Events can be added to an `AnimationClip` by specifying a time and event. ```rust // trigger an event after 1.0 second animation_clip.add_event(1.0, SomeEvent); ``` And optionally, providing a target id. ```rust let id = AnimationTargetId::from_iter(["shoulder", "arm", "hand"]); animation_clip.add_event_to_target(id, 1.0, HandEvent); ``` I modified the `animated_fox` example to show off the feature. ![CleanShot 2024-10-05 at 02 41 57](https://github.com/user-attachments/assets/0bb47db7-24f9-4504-88f1-40e375b89b1b) --------- Co-authored-by: Matty <weatherleymatthew@gmail.com> Co-authored-by: Chris Biscardi <chris@christopherbiscardi.com> Co-authored-by: François Mockers <francois.mockers@vleue.com> |
||
Eero Lehtinen
|
d0edbdac78
|
Fix cargo-ndk build command (#15648)
# Objective - Fix cargo-ndk build command documentation in readme. ```sh ❯ cargo ndk -t arm64-v8a build -o android_example/app/src/main/jniLibs Building arm64-v8a (aarch64-linux-android) error: unexpected argument '-o' found ``` ## Solution - Move "build" to the end of the command. ## Testing - With the new command order building works. ```sh ❯ cargo ndk -t arm64-v8a -o android_example/app/src/main/jniLibs build Building arm64-v8a (aarch64-linux-android) Compiling bevy_ptr v0.15.0-dev (/home/eero/repos/bevy/crates/bevy_ptr) Compiling bevy_macro_utils v0.15.0-dev (/home/eero/repos/bevy/crates/bevy_macro_utils) Compiling event-listener v5.3.1 ... rest of compilation ... ``` |
||
Viktor Gustavsson
|
f86ee32576
|
Add UI GhostNode (#15341)
# Objective - Fixes #14826 - For context, see #15238 ## Solution Add a `GhostNode` component to `bevy_ui` and update all the relevant systems to use it to traverse for UI children. - [x] `ghost_hierarchy` module - [x] Add `GhostNode` - [x] Add `UiRootNodes` system param for iterating (ghost-aware) UI root nodes - [x] Add `UiChildren` system param for iterating (ghost-aware) UI children - [x] Update `layout::ui_layout_system` - [x] Use ghost-aware root nodes for camera updates - [x] Update and remove children in taffy - [x] Initial spawn - [x] Detect changes on nested UI children - [x] Use ghost-aware children traversal in `update_uinode_geometry_recursive` - [x] Update the rest of the UI systems to use the ghost hierarchy - [x] `stack::ui_stack_system` - [x] `update::` - [x] `update_clipping_system` - [x] `update_target_camera_system` - [x] `accessibility::calc_name` ## Testing - [x] Added a new example `ghost_nodes` that can be used as a testbed. - [x] Added unit tests for _some_ of the traversal utilities in `ghost_hierarchy` - [x] Ensure this fulfills the needs for currently known use cases - [x] Reactivity libraries (test with `bevy_reactor`) - [ ] Text spans (mentioned by koe [on discord](https://discord.com/channels/691052431525675048/1285371432460881991/1285377442998915246)) --- ## Performance [See comment below](https://github.com/bevyengine/bevy/pull/15341#issuecomment-2385456820) ## Migration guide Any code that previously relied on `Parent`/`Children` to iterate UI children may now want to use `bevy_ui::UiChildren` to ensure ghost nodes are skipped, and their first descendant Nodes included. UI root nodes may now be children of ghost nodes, which means `Without<Parent>` might not query all root nodes. Use `bevy_ui::UiRootNodes` where needed to iterate root nodes instead. ## Potential future work - Benchmarking/optimizations of hierarchies containing lots of ghost nodes - Further exploration of UI hierarchies and markers for root nodes/leaf nodes to create better ergonomics for things like `UiLayer` (world-space ui) --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: UkoeHB <37489173+UkoeHB@users.noreply.github.com> |
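A sketch of ghost-aware traversal with the new system params (import paths and method names assumed from the descriptions above):

```rust
use bevy::prelude::*;
use bevy::ui::{UiChildren, UiRootNodes};

// Sketch: walk UI roots and their ghost-aware children, skipping
// GhostNode entities and including their first descendant Nodes.
fn print_ui_tree(roots: UiRootNodes, ui_children: UiChildren, names: Query<&Name>) {
    for root in roots.iter() {
        for child in ui_children.iter_ui_children(root) {
            if let Ok(name) = names.get(child) {
                info!("root {root} has child {name}");
            }
        }
    }
}
```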
||
Litttle_fish
|
e924df0e1a
|
Add features to switch NativeActivity and GameActivity usage (#12095)
# Objective Add two features to switch bevy to use `NativeActivity` or `GameActivity` on Android, using `GameActivity` by default. Also close #12058 and probably #12026. ## Solution Add two features to the corresponding crates so you can toggle it, like the `winit` and `android-activity` crates do. --- ## Changelog Removed default `NativeActivity` feature implementation for Android, added two new features to enable `NativeActivity` and `GameActivity`, and use `GameActivity` by default. ## Migration Guide Because `cargo-apk` is not compatible with `GameActivity`, building/running using `cargo apk build/run -p bevy_mobile_example` is no longer possible. Users should follow the new workflow described in the documentation. --------- Co-authored-by: François Mockers <francois.mockers@vleue.com> Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com> Co-authored-by: Rich Churcher <rich.churcher@gmail.com> |
||
m-edlund
|
c323db02e0
|
Add `sub_camera_view`, enabling sheared projection (#15537)
# Objective - This PR fixes #12488 ## Solution - This PR adds a new property to `Camera` that emulates the functionality of the [setViewOffset()](https://threejs.org/docs/#api/en/cameras/PerspectiveCamera.setViewOffset) API in three.js. - When set, the perspective and orthographic projections will restrict the visible area of the camera to a part of the view frustum defined by `offset` and `size`. ## Testing - In the new `camera_sub_view` example, a fixed, moving and control sub view is created for both perspective and orthographic projection - Run the example with `cargo run --example camera_sub_view` - The code can be tested by adding a `SubCameraView` to a camera --- ## Showcase ![image](https://github.com/user-attachments/assets/75ac45fc-d75d-4664-8ef6-ff7865297c25) - Left Half: Perspective Projection - Right Half: Orthographic Projection - Small boxes in order: - Sub view of the left half of the full image - Sub view moving from the top left to the bottom right of the full image - Sub view of the full image (acting as a control) - Large box: No sub view <details> <summary>Shortened camera setup of `camera_sub_view` example</summary> ```rust // Main perspective Camera commands.spawn(Camera3dBundle { transform, ..default() }); // Perspective camera left half commands.spawn(Camera3dBundle { camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view camera to the left half of the full image full_size: uvec2(500, 500), offset: ivec2(0, 0), size: uvec2(250, 500), }), order: 1, ..default() }, transform, ..default() }); // Perspective camera moving commands.spawn(( Camera3dBundle { camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view camera to a fifth of the full view and // move it in another system full_size: uvec2(500, 500), offset: ivec2(0, 0), size: uvec2(100, 100), }), order: 2, ..default() }, transform, ..default() }, MovingCameraMarker, )); // Perspective camera control commands.spawn(Camera3dBundle { camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view to the full image, to ensure that it matches // the projection without sub view full_size: uvec2(450, 450), offset: ivec2(0, 0), size: uvec2(450, 450), }), order: 3, ..default() }, transform, ..default() }); // Main orthographic camera commands.spawn(Camera3dBundle { projection: OrthographicProjection { ... } .into(), camera: Camera { order: 4, ..default() }, transform, ..default() }); // Orthographic camera left half commands.spawn(Camera3dBundle { projection: OrthographicProjection { ... } .into(), camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view camera to the left half of the full image full_size: uvec2(500, 500), offset: ivec2(0, 0), size: uvec2(250, 500), }), order: 5, ..default() }, transform, ..default() }); // Orthographic camera moving commands.spawn(( Camera3dBundle { projection: OrthographicProjection { ... } .into(), camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view camera to a fifth of the full view and // move it in another system full_size: uvec2(500, 500), offset: ivec2(0, 0), size: uvec2(100, 100), }), order: 6, ..default() }, transform, ..default() }, MovingCameraMarker, )); // Orthographic camera control commands.spawn(Camera3dBundle { projection: OrthographicProjection { ... 
} .into(), camera: Camera { sub_camera_view: Some(SubCameraView { // Set the sub view to the full image, to ensure that it matches // the projection without sub view full_size: uvec2(450, 450), offset: ivec2(0, 0), size: uvec2(450, 450), }), order: 7, ..default() }, transform, ..default() }); ``` </details> |
||
IceSentry
|
120d66482e
|
Clarify purpose of shader_instancing example (#15456)
# Objective - The shader_instancing example can be misleading since it doesn't explain that bevy has built-in automatic instancing. ## Solution - Explain that bevy has built-in instancing and that this example is for advanced users. - Add a new automatic_instancing example that shows how to use the built-in automatic instancing - Rename the shader_instancing example to custom_shader_instancing to highlight that this is a more advanced implementation --------- Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com> |
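For contrast, a sketch of the built-in path: no custom shader is needed, since entities that share the same mesh and material handles are batched into instanced draws automatically (bundle usage assumes the pre-required-components `PbrBundle` API):

```rust
use bevy::prelude::*;

// Sketch: 100 cubes sharing one mesh and one material, drawn instanced.
fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    let mesh = meshes.add(Cuboid::default());
    let material = materials.add(StandardMaterial::default());
    for x in 0..100 {
        commands.spawn(PbrBundle {
            mesh: mesh.clone(),
            material: material.clone(),
            transform: Transform::from_xyz(x as f32 * 2.0, 0.0, 0.0),
            ..default()
        });
    }
}
```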
||
Sou1gh0st
|
78a3aae81b
|
feat(gltf): add name component to gltf mesh primitive (#13912)
# Objective - fixes https://github.com/bevyengine/bevy/issues/13473 ## Solution - When a single mesh is assigned multiple materials, it is divided into several primitive nodes, with each primitive assigned a unique material. Presently, these primitives are named using the format Mesh.index, which complicates querying. To improve this, we can assign a specific name to each primitive based on the material’s name, since each primitive corresponds to one material exclusively. ## Testing - I have included a simple example which shows how to query a material and mesh part based on the new name component. ## Changelog - adds `GltfMaterialName` component to the mesh entity of the gltf primitive node. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
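A short sketch of querying by the new component (assuming `GltfMaterialName` is a `String` newtype; the material name "Glass" is hypothetical):

```rust
use bevy::gltf::GltfMaterialName;
use bevy::prelude::*;

// Sketch: find the mesh primitive entities that use a specific material
// within a spawned glTF scene.
fn find_glass_parts(query: Query<(Entity, &GltfMaterialName)>) {
    for (entity, material_name) in &query {
        if material_name.0 == "Glass" {
            info!("{entity} uses the Glass material");
        }
    }
}
```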
||
MiniaczQ
|
c1486654d7
|
QuerySingle family of system params (#15476)
# Objective Add the following system params: - `QuerySingle<D, F>` - Valid if only one matching entity exists, - `Option<QuerySingle<D, F>>` - Valid if zero or one matching entity exists. As @chescock pointed out, we don't need `Mut` variants. Fixes: #15264 ## Solution Implement the type and both variants of system params. Also implement `ReadOnlySystemParam` for readonly queries. Added a new ECS example `fallible_params` which showcases `QuerySingle` usage. In the future we might want to add `NonEmptyQuery`, `NonEmptyEventReader` and `Res` to it (or maybe just stop at mentioning it). ## Testing Tested with the example. There is a lot of warning spam so we might want to implement #15391. |
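A sketch of the new param in a system (assuming `QuerySingle` dereferences to the matched query item):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Player;

// Sketch: runs only when exactly one Player exists; otherwise the system
// is skipped instead of panicking.
fn follow_player(
    player: QuerySingle<&Transform, With<Player>>,
    mut cameras: Query<&mut Transform, (With<Camera>, Without<Player>)>,
) {
    let target = player.translation;
    for mut camera in &mut cameras {
        camera.translation = target + Vec3::new(0.0, 5.0, 10.0);
    }
}
```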
||
Matty
|
89e98b208f
|
Initial implementation of the Bevy Remote Protocol (Adopted) (#14880)
# Objective Adopted from #13563. The goal is to implement the Bevy Remote Protocol over HTTP/JSON, allowing the ECS to be interacted with remotely. ## Solution At a high level, there are really two separate things that have been undertaken here: 1. First, `RemotePlugin` has been created, which has the effect of embedding a [JSON-RPC](https://www.jsonrpc.org/specification) endpoint into a Bevy application. 2. Second, the [Bevy Remote Protocol verbs](https://gist.github.com/coreh/1baf6f255d7e86e4be29874d00137d1d#file-bevy-remote-protocol-md) (excluding `POLL`) have been implemented as remote methods for that JSON-RPC endpoint under a Bevy-exclusive namespace (e.g. `bevy/get`, `bevy/list`, etc.). To avoid some repetition, here is the crate-level documentation, which explains the request/response structure, built-in methods, and custom method configuration: <details> <summary>Click to view crate-level docs</summary> ```rust //! An implementation of the Bevy Remote Protocol over HTTP and JSON, to allow //! for remote control of a Bevy app. //! //! Adding the [`RemotePlugin`] to your [`App`] causes Bevy to accept //! connections over HTTP (by default, on port 15702) while your app is running. //! These *remote clients* can inspect and alter the state of the //! entity-component system. Clients are expected to `POST` JSON requests to the //! root URL; see the `client` example for a trivial example of use. //! //! The Bevy Remote Protocol is based on the JSON-RPC 2.0 protocol. //! //! ## Request objects //! //! A typical client request might look like this: //! //! ```json //! { //! "method": "bevy/get", //! "id": 0, //! "params": { //! "entity": 4294967298, //! "components": [ //! "bevy_transform::components::transform::Transform" //! ] //! } //! } //! ``` //! //! The `id` and `method` fields are required. The `params` field may be omitted //! for certain methods: //! //! * `id` is arbitrary JSON data. The server completely ignores its contents, //! and the client may use it for any purpose. It will be copied via //! serialization and deserialization (so object property order, etc. can't be //! relied upon to be identical) and sent back to the client as part of the //! response. //! //! * `method` is a string that specifies one of the possible [`BrpRequest`] //! variants: `bevy/query`, `bevy/get`, `bevy/insert`, etc. It's case-sensitive. //! //! * `params` is parameter data specific to the request. //! //! For more information, see the documentation for [`BrpRequest`]. //! [`BrpRequest`] is serialized to JSON via `serde`, so [the `serde` //! documentation] may be useful to clarify the correspondence between the Rust //! structure and the JSON format. //! //! ## Response objects //! //! A response from the server to the client might look like this: //! //! ```json //! { //! "jsonrpc": "2.0", //! "id": 0, //! "result": { //! "bevy_transform::components::transform::Transform": { //! "rotation": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0 }, //! "scale": { "x": 1.0, "y": 1.0, "z": 1.0 }, //! "translation": { "x": 0.0, "y": 0.5, "z": 0.0 } //! } //! } //! } //! ``` //! //! The `id` field will always be present. The `result` field will be present if the //! request was successful. Otherwise, an `error` field will replace it. //! //! * `id` is the arbitrary JSON data that was sent as part of the request. It //! will be identical to the `id` data sent during the request, modulo //! serialization and deserialization. If there's an error reading the `id` field, //! it will be `null`. //! //!
* `result` will be present if the request succeeded and will contain the response //! specific to the request. //! //! * `error` will be present if the request failed and will contain an error object //! with more information about the cause of failure. //! //! ## Error objects //! //! An error object might look like this: //! //! ```json //! { //! "code": -32602, //! "message": "Missing \"entity\" field" //! } //! ``` //! //! The `code` and `message` fields will always be present. There may also be a `data` field. //! //! * `code` is an integer representing the kind of error that happened. Error codes are documented //! in the [`error_codes`] module. //! //! * `message` is a short, one-sentence human-readable description of the error. //! //! * `data` is an optional field of arbitrary type containing additional information about the error. //! //! ## Built-in methods //! //! The Bevy Remote Protocol includes a number of built-in methods for accessing and modifying data //! in the ECS. Each of these methods uses the `bevy/` prefix, which is a namespace reserved for //! BRP built-in methods. //! //! ### bevy/get //! //! Retrieve the values of one or more components from an entity. //! //! `params`: //! - `entity`: The ID of the entity whose components will be fetched. //! - `components`: An array of fully-qualified type names of components to fetch. //! //! `result`: A map associating each type name to its value on the requested entity. //! //! ### bevy/query //! //! Perform a query over components in the ECS, returning all matching entities and their associated //! component values. //! //! All of the arrays that comprise this request are optional, and when they are not provided, they //! will be treated as if they were empty. //! //! `params`: //! - `data`: //! - `components` (optional): An array of fully-qualified type names of components to fetch. //! - `option` (optional): An array of fully-qualified type names of components to fetch optionally. //! - `has` (optional): An array of fully-qualified type names of components whose presence will be //! reported as boolean values. //! - `filter` (optional): //! - `with` (optional): An array of fully-qualified type names of components that must be present //! on entities in order for them to be included in results. //! - `without` (optional): An array of fully-qualified type names of components that must *not* be //! present on entities in order for them to be included in results. //! //! `result`: An array of objects, each containing: //! - `entity`: The ID of a query-matching entity. //! - `components`: A map associating each type name from `components`/`option` to its value on the matching //! entity if the component is present. //! - `has`: A map associating each type name from `has` to a boolean value indicating whether or not the //! entity has that component. If `has` was empty or omitted, this key will be omitted in the response. //! //! ### bevy/spawn //! //! Create a new entity with the provided components and return the resulting entity ID. //! //! `params`: //! - `components`: A map associating each component's fully-qualified type name with its value. //! //! `result`: //! - `entity`: The ID of the newly spawned entity. //! //! ### bevy/destroy //! //! Despawn the entity with the given ID. //! //! `params`: //! - `entity`: The ID of the entity to be despawned. //! //! `result`: null. //! //! ### bevy/remove //! //! Delete one or more components from an entity. //! //! `params`: //!
- `entity`: The ID of the entity whose components should be removed. //! - `components`: An array of fully-qualified type names of components to be removed. //! //! `result`: null. //! //! ### bevy/insert //! //! Insert one or more components into an entity. //! //! `params`: //! - `entity`: The ID of the entity to insert components into. //! - `components`: A map associating each component's fully-qualified type name with its value. //! //! `result`: null. //! //! ### bevy/reparent //! //! Assign a new parent to one or more entities. //! //! `params`: //! - `entities`: An array of entity IDs of entities that will be made children of the `parent`. //! - `parent` (optional): The entity ID of the parent to which the child entities will be assigned. //! If excluded, the given entities will be removed from their parents. //! //! `result`: null. //! //! ### bevy/list //! //! List all registered components or all components present on an entity. //! //! When `params` is not provided, this lists all registered components. If `params` is provided, //! this lists only those components present on the provided entity. //! //! `params` (optional): //! - `entity`: The ID of the entity whose components will be listed. //! //! `result`: An array of fully-qualified type names of components. //! //! ## Custom methods //! //! In addition to the provided methods, the Bevy Remote Protocol can be extended to include custom //! methods. This is primarily done during the initialization of [`RemotePlugin`], although the //! methods may also be extended at runtime using the [`RemoteMethods`] resource. //! //! ### Example //! ```ignore //! fn main() { //! App::new() //! .add_plugins(DefaultPlugins) //! .add_plugins( //! // `default` adds all of the built-in methods, while `with_method` extends them //! RemotePlugin::default() //! .with_method("super_user/cool_method".to_owned(), path::to::my::cool::handler) //! // ... more methods can be added by chaining `with_method` //! ) //! .add_systems( //! // ... standard application setup //! ) //! .run(); //! } //! ``` //! //! The handler is expected to be a system-convertible function which takes optional JSON parameters //! as input and returns a [`BrpResult`]. This means that it should have a type signature which looks //! something like this: //! ``` //! # use serde_json::Value; //! # use bevy_ecs::prelude::{In, World}; //! # use bevy_remote::BrpResult; //! fn handler(In(params): In<Option<Value>>, world: &mut World) -> BrpResult { //! todo!() //! } //! ``` //! //! Arbitrary system parameters can be used in conjunction with the optional `Value` input. The //! handler system will always run with exclusive `World` access. //! //! [the `serde` documentation]: https://serde.rs/ ``` </details> ### Message lifecycle At a high level, the lifecycle of client-server interactions is something like this: 1. The client sends one or more `BrpRequest`s. The deserialized version of that is just the Rust representation of a JSON-RPC request, and it looks like this: ```rust pub struct BrpRequest { /// The action to be performed. Parsing is deferred for the sake of error reporting. pub method: Option<Value>, /// Arbitrary data that will be returned verbatim to the client as part of /// the response. pub id: Option<Value>, /// The parameters, specific to each method. /// /// These are passed as the first argument to the method handler. /// Sometimes params can be omitted. pub params: Option<Value>, } ``` 2. These requests are accumulated in a mailbox resource (small lie but close enough). 3.
Each update, the mailbox is drained by a system `process_remote_requests`, where each request is processed according to its `method`, which has an associated handler. Each handler is a Bevy system that runs with exclusive world access and returns a result; e.g.: ```rust pub fn process_remote_get_request(In(params): In<Option<Value>>, world: &World) -> BrpResult { // ... } ``` 4. The result (or an error) is reported back to the client. ## Testing This can be tested by using the `server` and `client` examples. The `client` example is not particularly exhaustive at the moment (it only creates barebones `bevy/query` requests) but is still informative. Other queries can be made using `curl` with the `server` example running. For example, to make a `bevy/list` request and list all registered components: ```bash curl -X POST -d '{ "jsonrpc": "2.0", "id": 1, "method": "bevy/list" }' 127.0.0.1:15702 | jq . ``` --- ## Future direction There were a couple comments on BRP versioning while this was in draft. I agree that BRP versioning is a good idea, but I think that it requires some consensus on a couple fronts: - First of all, what does the version actually mean? Is it a version for the protocol itself or for the `bevy/*` methods implemented using it? Both? - Where does the version actually live? The most natural place is just where we have `"jsonrpc"` right now (at least if it's versioning the protocol itself), but this means we're not actually conforming to JSON-RPC any more (so, for example, any client library used to construct JSON-RPC requests would stop working). I'm not really against that, but it's at least a real decision. - What do we actually do when we encounter mismatched versions? Adding handling for this would be actual scope creep instead of just a little add-on in my opinion. Another thing that would be nice is making the internal structure of the implementation less JSON-specific. Right now, for example, component values that will appear in server responses are quite eagerly converted to JSON `Value`s, which prevents disentangling the handler logic from the communication medium, but it can probably be done in principle and I imagine it would enable more code reuse (e.g. for custom method handlers) in addition to making the internals more readily usable for other formats. --------- Co-authored-by: Patrick Walton <pcwalton@mimiga.net> Co-authored-by: DragonGamesStudios <margos.michal@gmail.com> Co-authored-by: Christopher Biscardi <chris@christopherbiscardi.com> Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com> |
||
Piefayth
|
55dddaf72e
|
UI Scrolling (#15291)
# Objective - Fixes #8074 - Adopts / Supersedes #8104 ## Solution Adapted from #8104 and affords the same benefits. **Additions** - [x] Update scrolling on relayout (height of node or contents may have changed) - [x] Make ScrollPosition component optional for ui nodes to avoid checking every node on scroll - [x] Nested scrollviews **Omissions** - Removed input handling for scrolling from `bevy_ui`. Users should update `ScrollPosition` directly. ### Implementation Adds a new `ScrollPosition` component. Updating this component on a `Node` with an overflow axis set to `OverflowAxis::Scroll` will reposition its children by that amount when calculating node transforms. As before, no impact on the underlying Taffy layout. Calculating this correctly is trickier than it was in #8104 due to `"Update scrolling on relayout"`. **Background** When `ScrollPosition` is updated directly by the user, it can be trivially handled in-engine by adding the parent's scroll position to the final location of each child node. However, _other layout actions_ may result in a situation where `ScrollPosition` needs to be updated. Consider a 1000 pixel tall vertically scrolling list of 100 elements, each 100 pixels tall. Scrolled to the bottom, the `ScrollPosition.offset_y` is 9000, just enough to display the last element in the list. When removing an element from that list, the new desired `ScrollPosition.offset_y` is 8900, but, critically, that is not known until after the sizes and positions of the children of the scrollable node are resolved. All user scrolling code today handles this by delaying the resolution by one frame. One notable disadvantage of this is the inability to support `WinitSettings::desktop_app()`, since there would need to be an input AFTER the layout change that caused the scroll position to update for the results of the scroll position update to render visually. I propose the alternative in this PR, which allows for same-frame resolution of scrolling layout. **Resolution** _Edit: Below resolution is outdated, and replaced with the simpler usage of taffy's `Layout::content_size`._ When recursively iterating the children of a node, each child now returns a `Vec2` representing the location of their own bottom right corner. Then, `[[0,0], [x,y]]` represents a bounding box containing the scrollable area filled by that child. Scrollable parents aggregate those areas into the bounding box of _all_ children, then consider that result against `ScrollPosition` to ensure its validity. In the event that resolution of the layout of the children invalidates the `ScrollPosition` (e.g. scrolled further than there were children to scroll to), _all_ children of that node must be recursively repositioned. The position of each child must change as a result of the change in scroll position. Therefore, this implementation takes care to only spend the cost of the "second layout pass" when a specific node actually had a `ScrollPosition` forcibly updated by the layout of its children. ## Testing Examples in `ui/scroll.rs`. There may be more complex node/style interactions that were unconsidered. --- ## Showcase ![scroll](https://github.com/user-attachments/assets/1331138f-93aa-4a8f-959c-6be18a04ff03) ## Alternatives - `bevy_ui` doesn't support scrolling. - `bevy_ui` implements scrolling with a one-frame delay on reactions to layout changes. |
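A sketch of the intended usage: input handling lives in user code per the omissions above, and `ScrollPosition` field names follow this PR (the wheel-to-pixels factor is arbitrary):

```rust
use bevy::input::mouse::MouseWheel;
use bevy::prelude::*;

// Sketch: spawn a node that scrolls on the y-axis.
fn setup(mut commands: Commands) {
    commands.spawn((
        NodeBundle {
            style: Style {
                overflow: Overflow {
                    x: OverflowAxis::Visible,
                    y: OverflowAxis::Scroll,
                },
                ..default()
            },
            ..default()
        },
        ScrollPosition::default(),
    ));
}

// Sketch: user code drives ScrollPosition from mouse wheel input.
fn apply_wheel(mut wheel: EventReader<MouseWheel>, mut scrolls: Query<&mut ScrollPosition>) {
    for event in wheel.read() {
        for mut scroll in &mut scrolls {
            scroll.offset_y -= event.y * 20.0;
        }
    }
}
```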
||
Patrick Walton
|
8154164f1b
|
Allow animation clips to animate arbitrary properties. (#15282)
Currently, Bevy restricts animation clips to animating `Transform::translation`, `Transform::rotation`, `Transform::scale`, or `MorphWeights`, which correspond to the properties that glTF can animate. This is insufficient for many use cases such as animating UI, as the UI layout systems expect to have exclusive control over UI elements' `Transform`s and therefore the `Style` properties must be animated instead. This commit fixes this, allowing for `AnimationClip`s to animate arbitrary properties. The `Keyframes` structure has been turned into a low-level trait that can be implemented to achieve arbitrary animation behavior. Along with `Keyframes`, this patch adds a higher-level trait, `AnimatableProperty`, that simplifies the task of animating single interpolable properties. Built-in `Keyframes` implementations exist for translation, rotation, scale, and morph weights. For the most part, you can migrate by simply changing your code from `Keyframes::Translation(...)` to `TranslationKeyframes(...)`, and likewise for rotation, scale, and morph weights. An example `AnimatableProperty` implementation for the font size of a text section follows: ```rust #[derive(Reflect)] struct FontSizeProperty; impl AnimatableProperty for FontSizeProperty { type Component = Text; type Property = f32; fn get_mut(component: &mut Self::Component) -> Option<&mut Self::Property> { Some(&mut component.sections.get_mut(0)?.style.font_size) } } ``` In order to keep this patch relatively small, this patch doesn't include an implementation of `AnimatableProperty` on top of the reflection system. That can be a follow-up. This patch builds on top of the new `EntityMutExcept<>` type in order to widen the `AnimationTarget` query to include write access to all components. Because `EntityMutExcept<>` has some performance overhead over an explicit query, we continue to explicitly query `Transform` in order to avoid regressing the performance of skeletal animation, such as the `many_foxes` benchmark. I've measured the performance of that benchmark and have found no significant regressions. A new example, `animated_ui`, has been added. This example shows how to use Bevy's built-in animation infrastructure to animate font size and color, which wasn't possible before this patch. ## Showcase https://github.com/user-attachments/assets/1fa73492-a9ce-405a-a8f2-4aacd7f6dc97 ## Migration Guide * Animation keyframes are now an extensible trait, not an enum. Replace `Keyframes::Translation(...)`, `Keyframes::Scale(...)`, `Keyframes::Rotation(...)`, and `Keyframes::Weights(...)` with `Box::new(TranslationKeyframes(...))`, `Box::new(ScaleKeyframes(...))`, `Box::new(RotationKeyframes(...))`, and `Box::new(MorphWeightsKeyframes(...))` respectively. |
||
Rich Churcher
|
e3b6b125a0
|
Add sprite and mesh alteration examples (#15298)
# Objective Add examples for manipulating sprites and meshes by either mutating the handle or direct manipulation of the asset, as described in #15056. Closes #3130. (The previous PR suffered a Git-tastrophe, and was unceremoniously closed, sry! 😅 ) --------- Co-authored-by: Jan Hohenheim <jan@hohenheim.ch> |
||
Patrick Walton
|
2ae5a21009
|
Implement percentage-closer soft shadows (PCSS). (#13497)
[*Percentage-closer soft shadows*] are a technique from 2004 that allows shadows to become blurrier farther from the objects that cast them. It works by introducing a *blocker search* step that runs before the normal shadow map sampling. The blocker search step detects the difference between the depth of the fragment being rasterized and the depth of the nearby samples in the depth buffer. Larger depth differences result in a larger penumbra and therefore a blurrier shadow. To enable PCSS, fill in the `soft_shadow_size` value in `DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This shadow size value represents the size of the light and should be tuned as appropriate for your scene. Higher values result in a wider penumbra (i.e. blurrier shadows). When using PCSS, temporal shadow maps (`ShadowFilteringMethod::Temporal`) are recommended. If you don't use `ShadowFilteringMethod::Temporal` and instead use `ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as `Temporal`, but the result won't vary over time. This produces a rather noisy result. Doing better would likely require downsampling the shadow map, which would be complex and slower (and would require PR #13003 to land first). In addition to PCSS, this commit makes the near Z plane for the shadow map configurable on a per-light basis. Previously, it had been hardcoded to 0.1 meters. This change was necessary to make the point light shadow map in the example look reasonable, as otherwise the shadows appeared far too aliased. A new example, `pcss`, has been added. It demonstrates the percentage-closer soft shadow technique with directional lights, point lights, spot lights, non-temporal operation, and temporal operation. The assets are my original work. Both temporal and non-temporal shadows are rather noisy in the example, and, as mentioned before, this is unavoidable without downsampling the depth buffer, which we can't do yet. Note also that the shadows don't look particularly great for point lights; the example simply isn't an ideal scene for them. Nevertheless, I felt that the benefits of the ability to do a side-by-side comparison of directional and point lights outweighed the unsightliness of the point light shadows in that example, so I kept the point light feature in. Fixes #3631. [*Percentage-closer soft shadows*]: https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf ## Changelog ### Added * Percentage-closer soft shadows (PCSS) are now supported, allowing shadows to become blurrier as they stretch away from objects. To use them, set the `soft_shadow_size` field in `DirectionalLight`, `PointLight`, or `SpotLight`, as applicable. * The near Z value for shadow maps is now customizable via the `shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and `SpotLight`. ## Screenshots PCSS off: ![Screenshot 2024-05-24 120012](https://github.com/bevyengine/bevy/assets/157897/0d35fe98-245b-44fb-8a43-8d0272a73b86) PCSS on: ![Screenshot 2024-05-24 115959](https://github.com/bevyengine/bevy/assets/157897/83397ef8-1317-49dd-bfb3-f8286d7610cd) --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com> |
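A minimal sketch of enabling PCSS on a light, per the description above (the size value is arbitrary and should be tuned per scene):

```rust
use bevy::prelude::*;

// Sketch: a directional light with soft shadows; `soft_shadow_size` is the
// light size, so larger values give a wider, blurrier penumbra. Pairing
// this with ShadowFilteringMethod::Temporal on the camera is recommended.
fn setup(mut commands: Commands) {
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            shadows_enabled: true,
            soft_shadow_size: Some(0.05),
            ..default()
        },
        ..default()
    });
}
```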
||
Rich Churcher
|
8e7ef64bb1
|
Split zoom/orbit into separate examples (#15135)
# Objective As previously discussed, split camera zoom and orbiting examples to keep things less cluttered. See discussion on #15092 for context. |
||
Rich Churcher
|
b9b43ad89c
|
Add examples for orthographic and perspective zoom (#15092)
# Objective Add examples for zooming (and orbiting) orthographic and perspective cameras. I'm pretty green with 3D, so please treat with suspicion! I note that if/when #15075 is merged, `.scale` will go away so this example uses `.scaling_mode`. Closes #2580 |
||
ickshonpe
|
8ac745ab10
|
UI texture slice texture flipping reimplementation (#15034)
# Objective Fixes #15032 ## Solution Reimplement support for the `flip_x` and `flip_y` fields. This doesn't flip the border geometry; I'm not really sure whether that is desirable or not. Also fixes a bug that was causing the side and center slices to tile incorrectly. ### Testing ``` cargo run --example ui_texture_slice_flip_and_tile ``` ## Showcase <img width="787" alt="nearest" src="https://github.com/user-attachments/assets/bc044bae-1748-42ba-92b5-0500c87264f6"> With tiling, you need to use nearest filtering to avoid bleeding between the slices. --------- Co-authored-by: Jan Hohenheim <jan@hohenheim.ch> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
||
Patrick Walton
|
d2624765d0
|
Implement animation masks, allowing fine control of the targets that animations affect. (#15013)
This commit adds support for *masks* to the animation graph. A mask is a set of animation targets (bones) that neither a node nor its descendants are allowed to animate. Animation targets can be assigned one or more *mask group*s, which are specific to a single graph. If a node masks out any mask group that an animation target belongs to, animation curves for that target will be ignored during evaluation. The canonical use case for masks is to support characters holding objects. Typically, character animations will contain hand animations in the case that the character's hand is empty. (For example, running animations may close a character's fingers into a fist.) However, when the character is holding an object, the animation must be altered so that the hand grips the object. Bevy currently has no convenient way to handle this. The only workaround that I can see is to have entirely separate animation clips for characters' hands and bodies and keep them in sync, which is burdensome and doesn't match artists' expectations from other engines, which all effectively have support for masks. However, with mask group support, this task is simple. We assign each hand to a mask group and parent all character animations to a node. When a character grasps an object in hand, we position the fingers as appropriate and then enable the mask group for that hand in that node. This allows the character's animations to run normally, while the object remains correctly attached to the hand. Note that even with this PR, we won't have support for running separate animations for a character's hand and the rest of the character. This is because we're missing additive blending: there's no way to combine the two masked animations together properly. I intend that to be a follow-up PR. The major engines all have support for masks, though the workflow varies from engine to engine: * Unity has support for masks [essentially as implemented here], though with layers instead of a tree. However, when using the Mecanim ("Humanoid") feature, precise control over bones is lost in favor of predefined muscle groups. * Unreal has a feature named [*layered blend per bone*]. This allows for separate blend weights for different bones, effectively achieving masks. I believe that the combination of blend nodes and masks make Bevy's animation graph as expressible as that of Unreal, once we have support for additive blending, though you may have to use more nodes than you would in Unreal. Moreover, separating out the concepts of "blend weight" and "which bones this node applies to" seems like a cleaner design than what Unreal has. * Godot's `AnimationTree` has the notion of [*blend filters*], which are essentially the same as masks as implemented in this PR. Additionally, this patch fixes a bug with weight evaluation whereby weights weren't properly propagated down to grandchildren, because the weight evaluation for a node only checked its parent's weight, not its evaluated weight. I considered submitting this as a separate PR, but given that this PR refactors that code entirely to support masks and weights under a unified "evaluated node" concept, I simply included the fix here. A new example, `animation_masks`, has been added. It demonstrates how to toggle masks on and off for specific portions of a skin. This is part of #14395, but I'm going to defer closing that issue until we have additive blending. 
[essentially as implemented here]: https://docs.unity3d.com/560/Documentation/Manual/class-AvatarMask.html [*layered blend per bone*]: https://dev.epicgames.com/documentation/en-us/unreal-engine/using-layered-animations-in-unreal-engine [*blend filters*]: https://docs.godotengine.org/en/stable/tutorials/animation/animation_tree.html ## Migration Guide * The serialized format of animation graphs has changed with the addition of animation masks. To upgrade animation graph RON files, add `mask` and `mask_groups` fields as appropriate. (They can be safely set to zero.) |
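A sketch of the workflow described above; `AnimationTargetId::from_iter` follows the existing API, while the graph methods and `mask` field here are assumptions based on this description:

```rust
use bevy::animation::{graph::AnimationGraph, AnimationTargetId};
use bevy::prelude::*;

// Sketch: put the hand's target in mask group 0, then mask that group out
// on a clip node so the clip stops animating the hand.
fn build_masked_graph(clip: Handle<AnimationClip>) -> AnimationGraph {
    let mut graph = AnimationGraph::new();
    let node = graph.add_clip(clip, 1.0, graph.root);
    let hand = AnimationTargetId::from_iter(["shoulder", "arm", "hand"]);
    graph.add_target_to_mask_group(hand, 0);
    graph[node].mask = 1 << 0; // one bit per mask group
    graph
}
```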
||
charlotte
|
a4640046fc
|
Adds ShaderStorageBuffer asset (#14663)
Adds a new `Handle<Storage>` asset type that can be used as a render asset, particularly for use with `AsBindGroup`. Closes: #13658 # Objective Allow users to create storage buffers in the main world without having to access the `RenderDevice`. While this resource is technically available, it's bad form to use in the main world and requires mixing rendering details with main world code. Additionally, this makes storage buffers easier to use with `AsBindGroup`, particularly in the following scenarios: - Sharing the same buffers between a compute stage and material shader. We already have examples of this for storage textures (see game of life example) and these changes allow a similar pattern to be used with storage buffers. - Preventing repeated GPU upload (see the previous easier to use `Vec` `AsBindGroup` option). - Allow initializing custom materials using `Default`. Previously, the lack of a `Default` implementation for the raw `wgpu::Buffer` type made implementing an `AsBindGroup + Default` bound difficult in the presence of buffers. ## Solution Adds a new `Handle<Storage>` asset type that is prepared into a `GpuStorageBuffer` render asset. This asset can either be initialized with a `Vec<u8>` of properly aligned data or with a size hint. Users can modify the underlying `wgpu::BufferDescriptor` to provide additional usage flags. ## Migration Guide The `AsBindGroup` `storage` attribute has been modified to reference the new `Handle<Storage>` asset instead. Usages of `Vec` should be converted into assets instead. --------- Co-authored-by: IceSentry <IceSentry@users.noreply.github.com> |
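A sketch combining the two halves, using the `ShaderStorageBuffer` name from this PR's title (attribute syntax and import paths assumed; the `Material` impl and shader are elided):

```rust
use bevy::prelude::*;
use bevy::render::render_resource::AsBindGroup;
use bevy::render::storage::ShaderStorageBuffer;

// Sketch: a material that binds the new storage-buffer asset.
#[derive(Asset, TypePath, AsBindGroup, Clone)]
struct MyMaterial {
    #[storage(0, read_only)]
    values: Handle<ShaderStorageBuffer>,
}

// Sketch: create the buffer in the main world, no RenderDevice needed; the
// handle can then be shared between a compute stage and a material.
fn setup(mut buffers: ResMut<Assets<ShaderStorageBuffer>>) {
    let _values = buffers.add(ShaderStorageBuffer::from(vec![0.0f32; 256]));
}
```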
||
JoshValjosh
|
3540b87e17
|
Add bevy_picking sprite backend (#14757)
# Objective Add `bevy_picking` sprite backend as part of the `bevy_mod_picking` upstreamening (#12365). ## Solution More or less a copy/paste from `bevy_mod_picking`, with the changes [here](https://github.com/aevyrie/bevy_mod_picking/pull/354). I'm putting that link here since those changes haven't yet made it through review, so should probably be reviewed on their own. ## Testing I couldn't find any sprite-backend-specific tests in `bevy_mod_picking` and unfortunately I'm not familiar enough with Bevy's testing patterns to write tests for code that relies on windowing and input. I'm willing to break the pointer hit system into testable blocks and add some more modular tests if that's deemed important enough to block, otherwise I can open an issue for adding tests as follow-up. ## Follow-up work - More docs/tests - Ignore pick events on transparent sprite pixels with potential opt-out --------- Co-authored-by: Aevyrie <aevyrie@gmail.com> |
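With the backend in place, picking events can be observed directly on sprite entities; a sketch (the asset path is hypothetical):

```rust
use bevy::prelude::*;

// Sketch: react to the pointer entering a sprite.
fn setup(mut commands: Commands, assets: Res<AssetServer>) {
    commands
        .spawn(SpriteBundle {
            texture: assets.load("branding/icon.png"),
            ..default()
        })
        .observe(|over: Trigger<Pointer<Over>>| {
            info!("pointer over sprite {}", over.entity());
        });
}
```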
||
Jiří Švejda
|
510fce9af3
|
Allow fog density texture to be scrolled over time with an offset (#14868)
# Objective - The goal of this PR is to make it possible to move the density texture of a `FogVolume` over time in order to create dynamic effects like fog moving in the wind. - You could theoretically move the `FogVolume` itself, but this is not ideal, because the `FogVolume` AABB would eventually leave the area. If you want an area to remain foggy while also creating the impression that the fog is moving in the wind, a scrolling density texture is a better solution. ## Solution - The PR adds a `density_texture_offset` field to the `FogVolume` component. This offset is in the UVW coordinates of the density texture, meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half along the x-axis. - Values above 1.0 are wrapped; a 1.5 offset is the same as a 0.5 offset. This makes it so that the density texture wraps around on the other side, meaning that a repeating 3d noise texture can seamlessly scroll forever. It also makes it easy to move the density texture over time by simply increasing the offset every frame. ## Testing - A `scrolling_fog` example has been added to demonstrate the feature. It uses the offset to scroll a repeating 3d noise density texture to create the impression of fog moving in the wind. - The camera is looking at a pillar with the sun peeking behind it. This highlights the effect the changing density has on the volumetric lighting interactions. - Temporal anti-aliasing combined with the `jitter` option of `VolumetricFogSettings` is used to improve the quality of the effect. --- ## Showcase https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8 |
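A sketch of the scrolling pattern described above: nudge `density_texture_offset` a little every frame and let the wrapping behavior do the rest (drift speeds are arbitrary):

```rust
use bevy::pbr::FogVolume;
use bevy::prelude::*;

fn scroll_fog(time: Res<Time>, mut volumes: Query<&mut FogVolume>) {
    for mut volume in &mut volumes {
        // Offsets are in UVW space and wrap above 1.0, so a repeating
        // noise texture scrolls seamlessly forever.
        volume.density_texture_offset += Vec3::new(0.02, 0.0, 0.04) * time.delta_seconds();
    }
}
```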
||
Nihilistas
|
eec38004a8
|
Add example demonstrating how to enable / disable diagnostics (#14741)
# Objective Fixes #14569 ## Solution Added an example to the diagnostic examples and linked the code to the docs of the diagnostic library itself. ## Testing I tested locally on my laptop in a web browser. Looked fine. You are able to collapse the whole "intro" part of the doc to get to the links sooner (for those who may think that including the example code here is annoying to scroll through). I would like people to run ```cargo doc``` and go to the bevy_diagnostic page to see if they have any issues or suggestions. --- ## Showcase <img width="1067" alt="Screenshot 2024-08-14 at 12 52 16" src="https://github.com/user-attachments/assets/70b6c18a-0bb9-4656-ba53-c416f62c6116"> --------- Co-authored-by: dpeke <dpekelis@funstage.com> |
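The gist of the example, sketched from the diagnostic API: look the diagnostic up in the `DiagnosticsStore` and flip its `is_enabled` flag (names assumed, so double-check against the example itself):

```rust
use bevy::diagnostic::{DiagnosticsStore, FrameTimeDiagnosticsPlugin};
use bevy::prelude::*;

// Press Space to pause/resume FPS collection.
fn toggle_fps_diagnostic(
    mut store: ResMut<DiagnosticsStore>,
    keyboard: Res<ButtonInput<KeyCode>>,
) {
    if keyboard.just_pressed(KeyCode::Space) {
        if let Some(fps) = store.get_mut(&FrameTimeDiagnosticsPlugin::FPS) {
            fps.is_enabled = !fps.is_enabled;
        }
    }
}
```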
||
TotalKrill
|
6adf31babf
|
hooking up observers and clicking for ui node (#14695)
Makes the newly merged picking usable for UI elements. Currently it both triggers the events and sends them through `commands.trigger_targets`. We should probably figure out if this is needed for them all. # Objective Hooks up observers and picking for a very simple example ## Solution Upstreamed the UI picking backend from bevy_mod_picking ## Testing Tested with the new example picking/simple_picking.rs --- --------- Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Kristoffer Søholm <k.soeholm@gmail.com> |
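A sketch along the lines of `simple_picking`: a UI node reacting to clicks through an observer instead of polling `Interaction` every frame (newer component style; the shipped example may differ):

```rust
use bevy::picking::events::{Click, Pointer};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(Camera2d);
    commands
        .spawn((Button, Text::new("Click me!")))
        .observe(|_trigger: Trigger<Pointer<Click>>| {
            info!("button clicked");
        });
}
```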
||
IceSentry
|
5abc32ceda
|
Add 2d opaque phase with depth buffer (#13069)
This PR is based on top of #12982 # Objective - Mesh2d currently only has an alpha blended phase. Most sprites don't need transparency though. - For some 2d games it can be useful to have a 2d depth buffer ## Solution - Add an opaque phase to render Mesh2d entities that don't need transparency - This phase currently uses the `SortedRenderPhase` to make it easier to implement based on the already existing transparent phase. A follow up PR will switch this to `BinnedRenderPhase`. - Add a 2d depth buffer - Use that depth buffer in the transparent phase to make sure that sprites and transparent mesh2d are displayed correctly ## Testing I added the mesh2d_transforms example that layers many opaque and transparent mesh2d to make sure they all get displayed correctly. I also confirmed it works with sprites by modifying that example locally. --- ## Changelog - Added `AlphaMode2d` - Added `Opaque2d` render phase - Camera2d now has a `ViewDepthTexture` component ## Migration Guide - `ColorMaterial` now contains `AlphaMode2d`. To keep previous behaviour, use `AlphaMode2d::Blend`. If you know your sprite is opaque, use `AlphaMode2d::Opaque` ## Follow up PRs - See tracking issue: #13265 --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Christopher Biscardi <chris@christopherbiscardi.com> |
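A sketch of opting a 2d material into the new opaque phase, assuming the `AlphaMode2d` variants mirror their 3d counterparts:

```rust
use bevy::prelude::*;
use bevy::sprite::AlphaMode2d;

fn make_opaque_material(mut materials: ResMut<Assets<ColorMaterial>>) {
    // Opaque 2d meshes skip back-to-front sorting and blending and are
    // drawn into the new `Opaque2d` phase with depth testing.
    materials.add(ColorMaterial {
        color: Color::WHITE,
        alpha_mode: AlphaMode2d::Opaque,
        ..default()
    });
}
```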
||
charlotte
|
3360b45153
|
Expose winit's MonitorHandle (#13669)
# Objective Adds a new `Monitor` component representing a winit `MonitorHandle` that can be used to spawn new windows and check for system monitor information. Closes #12955. ## Solution For every winit event, check available monitors and spawn them into the world as entities with a `Monitor` component. ## Testing TODO: - [x] Test plugging in and unplugging monitor during app runtime - [x] Test spawning a window on a second monitor by entity id - [ ] Since this touches winit, test all platforms --- ## Changelog - Adds a new `Monitor` component that can be queried for information about available system monitors. ## Migration Guide - `WindowMode` variants now take a `MonitorSelection`, which can be set to `MonitorSelection::Primary` to mirror the old behavior. --------- Co-authored-by: Pascal Hertleif <pascal@technocreatives.com> Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Pascal Hertleif <killercup@gmail.com> |
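A sketch of the "spawn a window on a second monitor by entity id" test noted above. It assumes `Monitor` exposes a `name` and that `WindowMode::Fullscreen` accepts the `MonitorSelection` per the migration guide:

```rust
use bevy::prelude::*;
use bevy::window::{Monitor, MonitorSelection, WindowMode};

fn spawn_on_second_monitor(mut commands: Commands, monitors: Query<(Entity, &Monitor)>) {
    // Monitors are now plain entities, so they can be queried and targeted.
    if let Some((entity, monitor)) = monitors.iter().nth(1) {
        info!("opening fullscreen window on {:?}", monitor.name);
        commands.spawn(Window {
            mode: WindowMode::Fullscreen(MonitorSelection::Entity(entity)),
            ..default()
        });
    }
}
```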
||
Jan Hohenheim
|
6f7c554daa
|
Fix common capitalization errors in documentation (#14562)
WASM -> Wasm MacOS -> macOS Nothing important, just something that annoyed me for a while :) |
||
IceSentry
|
bfcb19a871
|
Add example showing how to use SpecializedMeshPipeline (#14370)
# Objective - A lot of mid-level rendering APIs are hard to figure out because they don't have any examples - SpecializedMeshPipeline can be really useful in some cases when you want more flexibility than a Material without having to go to low-level APIs. ## Solution - Add an example showing how to make a custom `SpecializedMeshPipeline`. ## Testing - Did you test these changes? If so, how? - Are there any parts that need more testing? - How can other people (reviewers) test your changes? Is there anything specific they need to know? - If relevant, what platforms did you test these changes on, and are there any important ones you can't test? --- ## Showcase The example just spawns 3 triangles in a triangle pattern. ![image](https://github.com/user-attachments/assets/c3098758-94c4-4775-95e5-1d7c7fb9eb86) --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> |
||
Aevyrie
|
9575b20d31
|
Track source location in change detection (#14034)
# Objective - Make it possible to know *what* changed your component or resource. - Common need when debugging, when you want to know the last code location that mutated a value in the ECS. - This feature would be very useful for the editor alongside system stepping. ## Solution - Adds the caller location to column data. - Mutations now `track_caller` all the way up to the public API. - Commands that invoke these functions immediately call `Location::caller`, and pass this into the functions, instead of the functions themselves attempting to get the caller. This would not work for commands which are deferred, as the commands are executed by the scheduler, not the user's code. ## Testing - The `component_change_detection` example now shows where the component was mutated: ``` 2024-07-28T06:57:48.946022Z INFO component_change_detection: Entity { index: 1, generation: 1 }: New value: MyComponent(0.0) 2024-07-28T06:57:49.004371Z INFO component_change_detection: Entity { index: 1, generation: 1 }: New value: MyComponent(1.0) 2024-07-28T06:57:49.012738Z WARN component_change_detection: Change detected! -> value: Ref(MyComponent(1.0)) -> added: false -> changed: true -> changed by: examples/ecs/component_change_detection.rs:36:23 ``` - It's also possible to inspect change location from a debugger: <img width="608" alt="image" src="https://github.com/user-attachments/assets/c90ecc7a-0462-457a-80ae-42e7f5d346b4"> --- ## Changelog - Added source locations to ECS change detection behind the `track_change_detection` flag. ## Migration Guide - Added `changed_by` field to many internal ECS functions used with change detection when the `track_change_detection` feature flag is enabled. Use Location::caller() to provide the source of the function call. --------- Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com> Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com> |
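A sketch of reading the new location data from a system, assuming the `track_change_detection` feature is enabled and that `Ref` exposes the location as `changed_by()` (as the example output above suggests):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct MyComponent(f32);

fn report_changes(query: Query<Ref<MyComponent>>) {
    for component in &query {
        if component.is_changed() {
            // Prints something like "examples/ecs/component_change_detection.rs:36:23".
            info!("changed by: {}", component.changed_by());
        }
    }
}
```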
||
Sou1gh0st
|
9da18cce2a
|
Add support for environment map transformation (#14290)
# Objective - Fixes: https://github.com/bevyengine/bevy/issues/14036 ## Solution - Add a world space transformation for the environment sample direction. ## Testing - I have tested the newly added `transform` field using the newly added `rotate_environment_map` example. https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb ## Migration Guide - Since we have added a new field to the `EnvironmentMapLight` struct, users will need to include `..default()` or some rotation value in their initialization code. |
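A sketch of rotating the environment map at runtime, in the spirit of the `rotate_environment_map` example. The message calls the new field `transform`; it is shown here as a simple rotation, so treat the field name and type as assumptions:

```rust
use bevy::pbr::EnvironmentMapLight;
use bevy::prelude::*;

fn rotate_environment_map(time: Res<Time>, mut lights: Query<&mut EnvironmentMapLight>) {
    for mut light in &mut lights {
        // Spin the sample direction around the y-axis over time.
        light.rotation = Quat::from_rotation_y(time.elapsed_seconds() * 0.2);
    }
}
```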
||
Matty
|
3484bd916f
|
Cyclic splines (#14106)
# Objective Fill a gap in the functionality of our curve constructions by allowing users to easily build cyclic curves from control data. ## Solution Here I opted for something lightweight and discoverable. There is a new `CyclicCubicGenerator` trait with a method `to_curve_cyclic` which uses splines' control data to create curves that are cyclic. For now, its signature is exactly like that of `CubicGenerator` — `to_curve_cyclic` just yields a `CubicCurve`: ```rust /// Implement this on cubic splines that can generate a cyclic cubic curve from their spline parameters. /// /// This makes sense only when the control data can be interpreted cyclically. pub trait CyclicCubicGenerator<P: VectorSpace> { /// Build a cyclic [`CubicCurve`] by computing the interpolation coefficients for each curve segment. fn to_curve_cyclic(&self) -> CubicCurve<P>; } ``` This trait has been implemented for `CubicHermite`, `CubicCardinalSpline`, `CubicBSpline`, and `LinearSpline`: <img width="753" alt="Screenshot 2024-07-01 at 8 58 27 PM" src="https://github.com/bevyengine/bevy/assets/2975848/69ae0802-3b78-4fb9-b73a-6f842cf3b33c"> <img width="628" alt="Screenshot 2024-07-01 at 9 00 14 PM" src="https://github.com/bevyengine/bevy/assets/2975848/2992175a-a96c-40fc-b1a1-5206c3572cde"> <img width="606" alt="Screenshot 2024-07-01 at 8 59 36 PM" src="https://github.com/bevyengine/bevy/assets/2975848/9e99eb3a-dbe6-42da-886c-3d3e00410d03"> <img width="603" alt="Screenshot 2024-07-01 at 8 59 01 PM" src="https://github.com/bevyengine/bevy/assets/2975848/d037bc0c-396a-43af-ab5c-fad9a29417ef"> (Each type pictured respectively with the control points rendered as green spheres; tangents not pictured in the case of the Hermite spline.) These curves are all parametrized so that the output of `to_curve` and the output of `to_curve_cyclic` are similar. For instance, in `CubicCardinalSpline`, the first output segment is a curve segment joining the first and second control points in each, although it is constructed differently. In the other cases, the segments from `to_curve` are a subset of those in `to_curve_cyclic`, with the new segments appearing at the end. ## Testing I rendered cyclic splines from control data and made sure they looked reasonable. Existing tests are intact for splines where previous code was modified. (Note that the coefficient computation for cyclic spline segments is almost verbatim identical to that of their non-cyclic counterparts.) The Bezier benchmarks also look fine. --- ## Changelog - Added `CyclicCubicGenerator` trait to `bevy_math::cubic_splines` for creating cyclic curves from control data. - Implemented `CyclicCubicGenerator` for `CubicHermite`, `CubicCardinalSpline`, `CubicBSpline`, and `LinearSpline`. - `bevy_math` now depends on `itertools`. --- ## Discussion ### Design decisions The biggest thing here is just the approach taken in the first place: namely, the cyclic constructions use new methods on the same old structs. This choice was made to reduce friction and increase discoverability but also because creating new ones just seemed unnecessary: the underlying data would have been the same, so creating something like "`CyclicCubicBSpline`" whose internally-held control data is regarded as cyclic in nature doesn't really accomplish much — the end result for the user is basically the same either way. Similarly, I don't presently see a pressing need for `to_curve_cyclic` to output something other than a `CubicCurve`, although changing this in the future may be useful. See below. 
A notable omission here is that `CyclicCubicGenerator` is not implemented for `CubicBezier`. This is not a gap waiting to be filled — `CubicBezier` just doesn't have enough data to join its start with its end without just making up the requisite control points wholesale. In all the cases where `CyclicCubicGenerator` has been implemented here, the fashion in which the ends are connected is quite natural and follows the semantics of the associated spline construction. ### Future direction There are two main things here: 1. We should investigate whether we should do something similar for NURBS. I just don't know that much about NURBS at the moment, so I regarded this as out of scope for the PR. 2. We may eventually want to change the output type of `CyclicCubicGenerator::to_curve_cyclic` to a type which reifies the cyclic nature of the curve output. This wasn't done in this PR because I'm unsure how much value a type-level guarantee of cyclicity actually has, but if some useful features make sense only in the case of cyclic curves, this might be worth pursuing. |
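A usage sketch grounded in the trait shown above: the same control data, interpreted cyclically, yields a closed `CubicCurve` whose final segment joins the last control point back to the first:

```rust
use bevy::math::cubic_splines::{CubicCardinalSpline, CubicCurve, CyclicCubicGenerator};
use bevy::math::{vec2, Vec2};

fn closed_loop() -> CubicCurve<Vec2> {
    let control_points = [
        vec2(-1.0, -1.0),
        vec2(1.0, -1.0),
        vec2(1.0, 1.0),
        vec2(-1.0, 1.0),
    ];
    // Same spline type and data as `to_curve`, but interpreted cyclically.
    CubicCardinalSpline::new(0.3, control_points).to_curve_cyclic()
}
```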
||
Patrick Walton
|
20c6bcdba4
|
Allow volumetric fog to be localized to specific, optionally voxelized, regions. (#14099)
Currently, volumetric fog is global and affects the entire scene uniformly. This is inadequate for many use cases, such as local smoke effects. To address this problem, this commit introduces *fog volumes*, which are axis-aligned bounding boxes (AABBs) that specify fog parameters inside their boundaries. Such volumes can also specify a *density texture*, a 3D texture of voxels that specifies the density of the fog at each point. To create a fog volume, add a `FogVolume` component to an entity (which is included in the new `FogVolumeBundle` convenience bundle). Like light probes, a fog volume is conceptually a 1×1×1 cube centered on the origin; a transform can be used to position and resize this region. Many of the fields on the existing `VolumetricFogSettings` have migrated to the new `FogVolume` component. `VolumetricFogSettings` on a camera is still needed to enable volumetric fog. However, by itself `VolumetricFogSettings` is no longer sufficient to enable volumetric fog; a `FogVolume` must be present. Applications that wish to retain the old global fog behavior can simply surround the scene with a large fog volume. By way of implementation, this commit converts the volumetric fog shader from a full-screen shader to one applied to a mesh. The strategy is different depending on whether the camera is inside or outside the fog volume. If the camera is inside the fog volume, the mesh is simply a plane scaled to the viewport, effectively falling back to a full-screen pass. If the camera is outside the fog volume, the mesh is a cube transformed to coincide with the boundaries of the fog volume's AABB. Importantly, in the latter case, only the front faces of the cuboid are rendered. Instead of treating the boundaries of the fog as a sphere centered on the camera position, as we did prior to this patch, we raytrace the far planes of the AABB to determine the portion of each ray contained within the fog volume. We then raymarch in shadow map space as usual. If a density texture is present, we modulate the fixed density value with the trilinearly-interpolated value from that texture. Furthermore, this patch introduces optional jitter to fog volumes, intended for use with TAA. This modifies the position of the ray from frame to frame using interleaved gradient noise, in order to reduce aliasing artifacts. Many implementations of volumetric fog in games use this technique. Note that this patch makes no attempt to write a motion vector; this is because when a view ray intersects multiple voxels there's no single direction of motion. Consequently, fog volumes can have ghosting artifacts, but because fog is "ghostly" by its nature, these artifacts are less objectionable than they would be for opaque objects. A new example, `fog_volumes`, has been added. It demonstrates a single fog volume containing a voxelized representation of the Stanford bunny. The existing `volumetric_fog` example has been updated to use the new local volumetrics API. ## Changelog ### Added * Local `FogVolume`s are now supported, to localize fog to specific regions. They can optionally have 3D density voxel textures for precise control over the distribution of the fog. ### Changed * `VolumetricFogSettings` on a camera no longer enables volumetric fog; instead, it simply enables the processing of `FogVolume`s within the scene. ## Migration Guide * A `FogVolume` is now necessary in order to enable volumetric fog, in addition to `VolumetricFogSettings` on the camera. 
Existing uses of volumetric fog can be migrated by placing a large `FogVolume` surrounding the scene. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: François Mockers <mockersf@gmail.com> |
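A minimal sketch of the new setup, shown with bare components for brevity (at the time this shipped you would reach for `Camera3dBundle` and the `FogVolumeBundle` mentioned above):

```rust
use bevy::pbr::{FogVolume, VolumetricFogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // `VolumetricFogSettings` only enables processing of fog volumes now...
    commands.spawn((Camera3d::default(), VolumetricFogSettings::default()));

    // ...so at least one `FogVolume` must exist. It is a 1x1x1 cube under
    // its transform; scale it up to localize fog to a 10 m region.
    commands.spawn((
        FogVolume::default(),
        Transform::from_scale(Vec3::splat(10.0)),
    ));
}
```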
||
Chris Biscardi
|
73d7e89a18
|
remove rounded_borders and merge with borders example (#14317)
# Objective The borders example is separate from the rounded borders example. If you find the borders example, you may miss the rounded borders example. ## Solution Merge the examples in a basic way, since there is enough room to show all options at the same time. I also considered renaming the borders and rounded borders examples so that they would be located next to each other in repo and UI, but it felt like having a singular example was better. ## Testing ``` cargo run --example borders ``` --- ## Showcase The merged example looks like this: ![screenshot-2024-07-14-at-13 40 10@2x](https://github.com/user-attachments/assets/0f49cc46-1ca0-40d0-abec-020cbf0fb205) |
||
Gino Valente
|
276815a9a0
|
examples: Add Type Data reflection example (#13903)
# Objective Type data is a **super** useful tool to know about when working with reflection. However, most users don't fully understand how it works or that you can use it for more than just object-safe traits. This is unfortunate because it can be surprisingly simple to manually create your own type data. We should have an example detailing how type data works, how users can define their own, and how they can be used. ## Solution Added a `type_data` example. This example goes through all the major points about type data: - Why we need them - How they can be defined - The two ways they can be registered - A list of common/important type data provided by Bevy I also thought it might be good to go over the `#[reflect_trait]` macro as part of this example since it has all the other context, including how to define type data in places where `#[reflect_trait]` won't work. Because of this, I removed the `trait_reflection` example. ## Testing You can run the example locally with the following command: ``` cargo run --example type_data ``` --- ## Changelog - Added the `type_data` example - Removed the `trait_reflection` example |
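For reference, the core `#[reflect_trait]` flow the example walks through looks roughly like this; the macro generates the `ReflectGreet` type data that makes the dynamic cast possible:

```rust
use bevy::reflect::{reflect_trait, Reflect, TypeRegistry};
use std::any::Any;

#[reflect_trait] // generates `ReflectGreet` type data
trait Greet {
    fn greet(&self) -> String;
}

#[derive(Reflect)]
#[reflect(Greet)] // registers `ReflectGreet` alongside the type
struct Person(String);

impl Greet for Person {
    fn greet(&self) -> String {
        format!("Hello, {}!", self.0)
    }
}

fn main() {
    let mut registry = TypeRegistry::default();
    registry.register::<Person>();

    let person = Person("World".to_string());
    let reflect_value: &dyn Reflect = &person;

    // Fetch the type data for the concrete type, then use it to go from
    // `&dyn Reflect` to `&dyn Greet` without knowing the type statically.
    let reflect_greet = registry
        .get_type_data::<ReflectGreet>(reflect_value.type_id())
        .unwrap();
    let greet: &dyn Greet = reflect_greet.get(reflect_value).unwrap();
    assert_eq!(greet.greet(), "Hello, World!");
}
```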
||
Patrick Walton
|
fcda67e894
|
Start a built-in postprocessing stack, and implement chromatic aberration in it. (#13695)
This commit creates a new built-in postprocessing shader that's designed to hold miscellaneous postprocessing effects, and starts it off with chromatic aberration. Possible future effects include vignette, film grain, and lens distortion. [Chromatic aberration] is a common postprocessing effect that simulates lenses that fail to focus all colors of light to a single point. It's often used for impact effects and/or horror games. This patch uses the technique from *Inside* ([Gjøl & Svendsen 2016]), which allows the developer to customize the particular color pattern to achieve different effects. Unity HDRP uses the same technique, while Unreal has a hard-wired fixed color pattern. A new example, `post_processing`, has been added, in order to demonstrate the technique. The existing `post_processing` shader has been renamed to `custom_post_processing`, for clarity. [Chromatic aberration]: https://en.wikipedia.org/wiki/Chromatic_aberration [Gjøl & Svendsen 2016]: https://github.com/playdeadgames/publications/blob/master/INSIDE/rendering_inside_gdc2016.pdf ![Screenshot 2024-06-04 180304](https://github.com/bevyengine/bevy/assets/157897/3631c64f-a615-44fe-91ca-7f04df0a54b2) ![Screenshot 2024-06-04 180743](https://github.com/bevyengine/bevy/assets/157897/ee055cbf-4314-49c5-8bfa-8d8a17bd52bb) ## Changelog ### Added * Chromatic aberration is now available as a built-in postprocessing effect. To use it, add `ChromaticAberration` to your camera. |
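Per the changelog, enabling the effect is just a component on the camera. A minimal sketch (the `intensity` value is arbitrary):

```rust
use bevy::core_pipeline::post_process::ChromaticAberration;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3d::default(),
        // Strength of the color fringing; the color pattern itself can be
        // customized via the component's LUT to mimic different lenses.
        ChromaticAberration {
            intensity: 0.02,
            ..default()
        },
    ));
}
```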
||
Miles Silberling-Cook
|
ed2b8e0f35
|
Minimal Bubbling Observers (#13991)
# Objective Add basic bubbling to observers, modeled off `bevy_eventlistener`. ## Solution - Introduce a new `Traversal` trait for components which point to other entities. - Provide a default `TraverseNone: Traversal` component which cannot be constructed. - Implement `Traversal` for `Parent`. - The `Event` trait now has an associated `Traversal` which defaults to `TraverseNone`. - Added a field `bubbling: &mut bool` to `Trigger` which can be used to instruct the runner to bubble the event to the entity specified by the event's traversal type. - Added an associated constant `AUTO_PROPAGATE` to `Event` which configures the default bubbling state. - Added logic to wire this all up correctly. Introducing the new associated information directly on `Event` (instead of a new `BubblingEvent` trait) lets us dispatch both bubbling and non-bubbling events through the same API. ## Testing I have added several unit tests to cover the common bugs I identified during development. Running the unit tests should be enough to validate correctness. The changes affect unsafe portions of the code, but should not change any of the safety assertions. ## Changelog Observers can now bubble up the entity hierarchy! To create a bubbling event, change your `#[derive(Event)]` to something like the following: ```rust struct MyEvent; impl Event for MyEvent { type Traverse = Parent; // This event will propagate up from child to parent. const AUTO_PROPAGATE: bool = true; // This event will propagate by default. } ``` You can dispatch a bubbling event using the normal `world.trigger_targets(MyEvent, entity)`. Halting an event mid-bubble can be done using `trigger.propagate(false)`. Events with `AUTO_PROPAGATE = false` will not propagate by default, but you can enable it using `trigger.propagate(true)`. If there are multiple observers attached to a target, they will all be triggered by bubbling. They all share a bubbling state, which can be accessed mutably using `trigger.propagation_mut()` (`trigger.propagate` is just sugar for this). You can choose to implement `Traversal` for your own types, if you want to bubble along a different structure than provided by `bevy_hierarchy`. Implementers must be careful never to produce loops, because this will cause bevy to hang. ## Migration Guide + Manual implementations of `Event` should add associated type `Traverse = TraverseNone` and associated constant `AUTO_PROPAGATE = false`; + `Trigger::new` has new field `propagation: &mut Propagation` which provides the bubbling state. + `ObserverRunner` now takes the same `&mut Propagation` as a final parameter. --------- Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com> Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com> Co-authored-by: Carter Anderson <mcanders1@gmail.com> |
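A usage sketch for the `MyEvent` defined above: trigger it on a child, observe it on the parent, and stop it mid-bubble once handled:

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    let parent = commands
        .spawn_empty()
        .observe(|mut trigger: Trigger<MyEvent>| {
            info!("handled on {}", trigger.entity());
            // Halt propagation so ancestors never see the event.
            trigger.propagate(false);
        })
        .id();
    let child = commands.spawn_empty().set_parent(parent).id();

    // Triggered on the child; with `AUTO_PROPAGATE = true` it bubbles up
    // through `Parent` links until stopped.
    commands.trigger_targets(MyEvent, child);
}
```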
||
Sunil Thunga
|
5ffdc0c93f
|
Moves smooth_follow to movement dir (#14249)
# Objective - Moves smooth_follow.rs into the movement directory in examples - Fixes #14241 ## Solution - Move smooth_follow.rs to the movement directory in examples. |
||
Jan Hohenheim
|
d0e606b87c
|
Add an example for doing movement in fixed timesteps (#14223)
_copy-pasted from my doc comment in the code_ # Objective This example shows how to properly handle player input, advance a physics simulation in a fixed timestep, and display the results. The classic source for how and why this is done is Glenn Fiedler's article [Fix Your Timestep!](https://gafferongames.com/post/fix_your_timestep/). ## Motivation The naive way of moving a player is to just update their position like so: ```rust transform.translation += velocity; ``` The issue here is that the player's movement speed will be tied to the frame rate. Faster machines will move the player faster, and slower machines will move the player slower. In fact, you can observe this today when running some old games that did it this way on modern hardware! The player will move at a breakneck pace. The more sophisticated way is to update the player's position based on the time that has passed: ```rust transform.translation += velocity * time.delta_seconds(); ``` This way, velocity represents a speed in units per second, and the player will move at the same speed regardless of the frame rate. However, this can still be problematic if the frame rate is very low or very high. If the frame rate is very low, the player will move in large jumps. This may lead to a player moving in such large jumps that they pass through walls or other obstacles. In general, you cannot expect a physics simulation to behave nicely with *any* delta time. Ideally, we want to have some stability in what kinds of delta times we feed into our physics simulation. The solution is using a fixed timestep. This means that we advance the physics simulation by a fixed amount at a time. If the real time that passed between two frames is less than the fixed timestep, we simply don't advance the physics simulation at all. If it is more, we advance the physics simulation multiple times until we catch up. You can read more about how Bevy implements this in the documentation for [`bevy::time::Fixed`](https://docs.rs/bevy/latest/bevy/time/struct.Fixed.html). This leaves us with a last problem, however. If our physics simulation may advance zero or multiple times per frame, there may be frames in which the player's position did not need to be updated at all, and some where it is updated by a large amount that resulted from running the physics simulation multiple times. This is physically correct, but visually jarring. Imagine a player moving in a straight line, but depending on the frame rate, they may sometimes advance by a large amount and sometimes not at all. Visually, we want the player to move smoothly. This is why we need to separate the player's position in the physics simulation from the player's position in the visual representation. The visual representation can then be interpolated smoothly based on the last and current actual player position in the physics simulation. This is a tradeoff: every visual frame is now slightly lagging behind the actual physical frame, but in return, the player's movement will appear smooth. There are other ways to compute the visual representation of the player, such as extrapolation. See the [documentation of the lightyear crate](https://cbournhonesque.github.io/lightyear/book/concepts/advanced_replication/visual_interpolation.html) for a nice overview of the different methods and their tradeoffs. ## Implementation - The player's velocity is stored in a `Velocity` component. This is the speed in units per second. 
- The player's current position in the physics simulation is stored in a `PhysicalTranslation` component. - The player's previous position in the physics simulation is stored in a `PreviousPhysicalTranslation` component. - The player's visual representation is stored in Bevy's regular `Transform` component. - Every frame, we go through the following steps: - Advance the physics simulation by one fixed timestep in the `advance_physics` system. This is run in the `FixedUpdate` schedule, which runs before the `Update` schedule. - Update the player's visual representation in the `update_displayed_transform` system. This interpolates between the player's previous and current position in the physics simulation. - Update the player's velocity based on the player's input in the `handle_input` system. ## Relevant Issues Related to #1259. I'm also fairly sure I've seen an issue somewhere made by @alice-i-cecile about showing how to move a character correctly in a fixed timestep, but I cannot find it. |
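The interpolation step, sketched below. The component definitions are assumed to be simple `Vec3` wrappers; `overstep_fraction` reports how far we are into the current fixed tick:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct PhysicalTranslation(Vec3);
#[derive(Component)]
struct PreviousPhysicalTranslation(Vec3);

fn update_displayed_transform(
    fixed_time: Res<Time<Fixed>>,
    mut query: Query<(
        &mut Transform,
        &PhysicalTranslation,
        &PreviousPhysicalTranslation,
    )>,
) {
    // 0.0 just after a physics tick, approaching 1.0 right before the next,
    // so the rendered position always lags at most one fixed timestep.
    let alpha = fixed_time.overstep_fraction();
    for (mut transform, current, previous) in &mut query {
        transform.translation = previous.0.lerp(current.0, alpha);
    }
}
```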
||
Ben Frankel
|
3452781bf7
|
Deduplicate Wasm optimization instructions (#14173)
See https://github.com/bevyengine/bevy-website/pull/1538 for context. |
||
Gino Valente
|
276dd04001
|
bevy_reflect: Function reflection (#13152)
# Objective
We're able to reflect types sooooooo... why not functions?
The goal of this PR is to make functions callable within a dynamic
context, where type information is not readily available at compile
time.
For example, if we have a function:
```rust
fn add(left: i32, right: i32) -> i32 {
    left + right
}
```
And two `Reflect` values we've already validated are `i32` types:
```rust
let left: Box<dyn Reflect> = Box::new(2_i32);
let right: Box<dyn Reflect> = Box::new(2_i32);
```
We should be able to call `add` with these values:
```rust
// ?????
let result: Box<dyn Reflect> = add.call_dynamic(left, right);
```
And ideally this wouldn't just work for functions, but methods and
closures too!
Right now, users have two options:
1. Manually parse the reflected data and call the function themselves
2. Rely on registered type data to handle the conversions for them
For a small function like `add`, this isn't too bad. But what about for
more complex functions? What about for many functions?
At worst, this process is error-prone. At best, it's simply tedious.
And this is assuming we know the function at compile time. What if we
want to accept a function dynamically and call it with our own
arguments?
It would be much nicer if `bevy_reflect` could alleviate some of the
problems here.
## Solution
Added function reflection!
This adds a `DynamicFunction` type to wrap a function dynamically. This
can be called with an `ArgList`, which is a dynamic list of
`Reflect`-containing `Arg` arguments. It returns a `FunctionResult`
which indicates whether or not the function call succeeded, returning a
`Reflect`-containing `Return` type if it did succeed.
Many functions can be converted into this `DynamicFunction` type thanks
to the `IntoFunction` trait.
Taking our previous `add` example, this might look something like
(explicit types added for readability):
```rust
fn add(left: i32, right: i32) -> i32 {
    left + right
}
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
And it also works on closures:
```rust
let add = |left: i32, right: i32| left + right;
let mut function: DynamicFunction = add.into_function();
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
let result: Return = function.call(args).unwrap();
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
As well as methods:
```rust
#[derive(Reflect)]
struct Foo(i32);
impl Foo {
    fn add(&mut self, value: i32) {
        self.0 += value;
    }
}
let mut foo = Foo(2);
let mut function: DynamicFunction = Foo::add.into_function();
let args: ArgList = ArgList::new().push_mut(&mut foo).push_owned(2_i32);
function.call(args).unwrap();
assert_eq!(foo.0, 4);
```
### Limitations
While this does cover many functions, it is far from a perfect system
and has quite a few limitations. Here are a few of the limitations when
using `IntoFunction`:
1. The lifetime of the return value is only tied to the lifetime of the
first argument (useful for methods). This means you can't have a
function like `(a: i32, b: &i32) -> &i32` without creating the
`DynamicFunction` manually.
2. Only 15 arguments are currently supported. If the first argument is a
(mutable) reference, this number increases to 16.
3. Manual implementations of `Reflect` will need to implement the new
`FromArg`, `GetOwnership`, and `IntoReturn` traits in order to be used
as arguments/return types.
And some limitations of `DynamicFunction` itself:
1. All arguments share the same lifetime, or rather, they will shrink to
the shortest lifetime.
2. Closures that capture their environment may need to have their
`DynamicFunction` dropped before accessing those variables again (there
is a `DynamicFunction::call_once` to make this a bit easier)
3. All arguments and return types must implement `Reflect`. While not a
big surprise coming from `bevy_reflect`, this implementation could
actually still work by swapping `Reflect` out with `Any`. Of course,
that makes working with the arguments and return values a bit harder.
4. Generic functions are not supported (unless they have been manually
monomorphized)
And general, reflection gotchas:
1. `&str` does not implement `Reflect`. Rather, `&'static str`
implements `Reflect` (the same is true for `&Path` and similar types).
This means that `&'static str` is considered an "owned" value for the
sake of generating arguments. Additionally, arguments and return types
containing `&str` will assume it's `&'static str`, which is almost never
the desired behavior. In these cases, the only solution (I believe) is
to use `&String` instead.
### Followup Work
This PR is the first of two PRs I intend to work on. The second PR will
aim to integrate this new function reflection system into the existing
reflection traits and `TypeInfo`. The goal would be to register and call
a reflected type's methods dynamically.
I chose not to do that in this PR since the diff is already quite large.
I also want the discussion for both PRs to be focused on their own
implementation.
Another followup I'd like to do is investigate allowing common container
types as a return type, such as `Option<&[mut] T>` and `Result<&[mut] T,
E>`. This would allow even more functions to opt into this system. I
chose to not include it in this one, though, for the same reasoning as
previously mentioned.
### Alternatives
One alternative I had considered was adding a macro to convert any
function into a reflection-based counterpart. The idea would be that a
struct that wraps the function would be created and users could specify
which arguments and return values should be `Reflect`. It could then be
called via a new `Function` trait.
I think that could still work, but it will be a fair bit more involved,
requiring some slightly more complex parsing. And it of course is a bit
more work for the user, since they need to create the type via macro
invocation.
It also makes registering these functions onto a type a bit more
complicated (depending on how it's implemented).
For now, I think this is a fairly simple, yet powerful solution that
provides the least amount of friction for users.
---
## Showcase
Bevy now adds support for storing and calling functions dynamically
using reflection!
```rust
// 1. Take a standard Rust function
fn add(left: i32, right: i32) -> i32 {
    left + right
}
// 2. Convert it into a type-erased `DynamicFunction` using the `IntoFunction` trait
let mut function: DynamicFunction = add.into_function();
// 3. Define your arguments from reflected values
let args: ArgList = ArgList::new().push_owned(2_i32).push_owned(2_i32);
// 4. Call the function with your arguments
let result: Return = function.call(args).unwrap();
// 5. Extract the return value
let value: Box<dyn Reflect> = result.unwrap_owned();
assert_eq!(value.take::<i32>().unwrap(), 4);
```
## Changelog
#### TL;DR
- Added support for function reflection
- Added a new `Function Reflection` example:
|
||
Patrick Walton
|
44db8b7fac
|
Allow phase items not associated with meshes to be binned. (#14029)
As reported in #14004, many third-party plugins, such as Hanabi, enqueue entities that don't have meshes into render phases. However, the introduction of indirect mode added a dependency on mesh-specific data, breaking this workflow. This is because GPU preprocessing requires that the render phases manage indirect draw parameters, which don't apply to objects that aren't meshes. The existing code skips over binned entities that don't have indirect draw parameters, which causes the rendering to be skipped for such objects. To support this workflow, this commit adds a new field, `non_mesh_items`, to `BinnedRenderPhase`. This field contains a simple list of (bin key, entity) pairs. After drawing batchable and unbatchable objects, the non-mesh items are drawn one after another. Bevy itself doesn't enqueue any items into this list; it exists solely for the application and/or plugins to use. Additionally, this commit switches the asset ID in the standard bin keys to be an untyped asset ID rather than that of a mesh. This allows more flexibility, allowing bins to be keyed off any type of asset. This patch adds a new example, `custom_phase_item`, which simultaneously serves to demonstrate how to use this new feature and to act as a regression test so this doesn't break again. Fixes #14004. ## Changelog ### Added * `BinnedRenderPhase` now contains a `non_mesh_items` field for plugins to add custom items to. |
||
Jan Hohenheim
|
48f70789f5
|
Add first person view model example (#13828)
# Objective A very common way to organize a first-person view is to split it into two kinds of models: - The *view model* is the model that represents the player's body. - The *world model* is everything else. The reason for this distinction is that these two models should be rendered with different FOVs. The view model is typically designed and animated with a very specific FOV in mind, so it is generally *fixed* and cannot be changed by a player. The world model, on the other hand, should be able to change its FOV to accommodate the player's preferences for the following reasons: - *Accessibility*: How prone is the player to motion sickness? A wider FOV can help. - *Tactical preference*: Does the player want to see more of the battlefield? Or have a more zoomed-in view for precision aiming? - *Physical considerations*: How well does the in-game FOV match the player's real-world FOV? Are they sitting in front of a monitor or playing on a TV in the living room? How big is the screen? ## Solution I've added an example implementing the described setup as follows. The `Player` is an entity holding two cameras, one for each model. The view model camera has a fixed FOV of 70 degrees, while the world model camera has a variable FOV that can be changed by the player. I use different `RenderLayers` to select what to render. - The world model camera has no explicit `RenderLayers` component, so it uses the layer 0. All static objects in the scene are also on layer 0 for the same reason. - The view model camera has a `RenderLayers` component with layer 1, so it only renders objects explicitly assigned to layer 1. The arm of the player is one such object. The order of the view model camera is additionally bumped to 1 to ensure it renders on top of the world model. - The light source in the scene must illuminate both the view model and the world model, so it is assigned to both layers 0 and 1. To better see the effect, the player can move the camera by dragging their mouse and change the world model's FOV with the arrow keys. The arrow up key maps to "decrease FOV" and the arrow down key maps to "increase FOV". This sounds backwards on paper, but is more intuitive when actually changing the FOV in-game since a decrease in FOV looks like a zoom-in. I intentionally do not allow changing the view model's FOV even though it would be illustrative because that would be an anti-pattern and bloat the code a bit. The example is called `first_person_view_model` and not just `first_person` because I want to highlight that this is not a simple flycam, but actually renders the player. ## Testing Default FOV: <img width="1392" alt="image" src="https://github.com/bevyengine/bevy/assets/9047632/8c2e804f-fac2-48c7-8a22-d85af999dfb2"> Decreased FOV: <img width="1392" alt="image" src="https://github.com/bevyengine/bevy/assets/9047632/1733b3e5-f583-4214-a454-3554e3cbd066"> Increased FOV: <img width="1392" alt="image" src="https://github.com/bevyengine/bevy/assets/9047632/0b0640e6-5743-46f6-a79a-7181ba9678e8"> Note that the white bar on the right represents the player's arm, which is more obvious in-game because you can move the camera around. The box on top is there to make sure that the view model is receiving shadows. I tested only on macOS. --- ## Changelog I don't think new examples go in here, do they? 
## Caveat The solution used here was implemented with help by @robtfm on [Discord](https://discord.com/channels/691052431525675048/866787577687310356/1241019224491561000): > shadow maps are specific to lights, not to layers > if you want shadows from some meshes that are not visible, you could have light on layer 1+2, meshes on layer 2, camera on layer 1 (for example) > but this might change in future, it's not exactly an intended feature In other words, the example code as-is is not guaranteed to work in the future. I want to bring this up because the use-case presented here is extremely common in first-person games and important for accessibility. It would be good to have a blessed and easy way of how to achieve it. I'm also not happy about how I get the `perspective` variable in `change_fov`. Very open to suggestions :) ## Related issues - Addresses parts of #12658 - Addresses parts of #12588 --------- Co-authored-by: Pascal Hertleif <killercup@gmail.com> |
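A condensed sketch of the layer setup described above, with bare components for brevity and the `RenderLayers` values from the write-up:

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn setup(mut commands: Commands) {
    // World-model camera: default layer 0, player-adjustable FOV.
    commands.spawn((
        Camera3d::default(),
        Projection::from(PerspectiveProjection {
            fov: 90.0_f32.to_radians(),
            ..default()
        }),
    ));
    // View-model camera: fixed FOV, renders only layer 1, drawn on top.
    commands.spawn((
        Camera3d::default(),
        Camera { order: 1, ..default() },
        Projection::from(PerspectiveProjection {
            fov: 70.0_f32.to_radians(),
            ..default()
        }),
        RenderLayers::layer(1),
    ));
    // The light must illuminate both the world model and the view model.
    commands.spawn((
        DirectionalLight::default(),
        RenderLayers::from_layers(&[0, 1]),
    ));
}
```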