<!-- MD041 - This file will be included in docs and should not start with a top header -->
<!-- markdownlint-disable-file MD041 -->
## Cargo Features

Bevy exposes many features to customize the engine. Enabling them adds functionality, but often comes at the cost of longer compilation times and extra dependencies.

### Default Features

The default feature set enables most of the expected features of a game engine, like rendering in both 2D and 3D, asset loading, audio and UI. To help reduce compilation time, consider disabling default features and enabling only those you need.
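As a sketch of what that can look like, the snippet below opts out of the default set and re-enables only a handful of entries from the tables in this document. The version number and the exact feature selection are placeholders to adapt to your project:

```toml
[dependencies.bevy]
version = "0.15"   # placeholder; pin this to the Bevy release you actually use
default-features = false
features = [
    "bevy_asset",         # asset loading
    "bevy_winit",         # winit window and input backend
    "bevy_render",        # core rendering
    "bevy_core_pipeline", # cameras and basic render pipeline features
    "bevy_sprite",        # 2D sprites
    "png",                # PNG image format support
    "x11",                # X11 display server support
]
```
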
|feature name|description|
|-|-|
|android-game-activity|Android GameActivity support. Default, choose between this and `android-native-activity`.|
|android_shared_stdcxx|Enable using a shared stdlib for cxx on Android|
|animation|Enable animation support, and glTF animation loading|
|bevy_animation|Provides animation functionality|
|bevy_asset|Provides asset functionality|
|bevy_audio|Provides audio functionality|
|bevy_color|Provides shared color types and operations|
|bevy_core_pipeline|Provides cameras and other basic render pipeline features|
|bevy_gilrs|Adds gamepad support|
|bevy_gizmos|Adds support for rendering gizmos|
|bevy_gltf|[glTF](https://www.khronos.org/gltf/) support|
|bevy_mesh_picking_backend|Provides an implementation for picking meshes|
|bevy_pbr|Adds PBR rendering|
|bevy_picking|Provides picking functionality|
|bevy_render|Provides rendering functionality|
|bevy_scene|Provides scene functionality|
|bevy_sprite|Provides sprite functionality|
|bevy_sprite_picking_backend|Provides an implementation for picking sprites|
|bevy_state|Enable built in global state machines|
|bevy_text|Provides text functionality|
|bevy_ui|A custom ECS-driven UI framework|
|bevy_ui_picking_backend|Provides an implementation for picking UI|
|bevy_window|Windowing layer|
|bevy_winit|winit window and input backend|
|custom_cursor|Enable winit custom cursor support|
|default_font|Include a default font, containing only ASCII characters, at the cost of a 20kB binary size increase|
|hdr|HDR image format support|
|ktx2|KTX2 compressed texture support|
|multi_threaded|Enables multithreaded parallelism in the engine. Disabling it forces all engine tasks to run on a single thread.|
|png|PNG image format support|
|smaa_luts|Include SMAA Look Up Tables KTX2 files|
|sysinfo_plugin|Enables system information diagnostic plugin|
|tonemapping_luts|Include tonemapping Look Up Tables KTX2 files. If everything is pink, you need to enable this feature or change the `Tonemapping` method for your `Camera2d` or `Camera3d`.|
|vorbis|OGG/VORBIS audio format support|
|webgl2|Enable some limitations to be able to use WebGL2. Please refer to the [WebGL2 and WebGPU](https://github.com/bevyengine/bevy/tree/latest/examples#webgl2-and-webgpu) section of the examples README for more information on how to run Wasm builds with WebGPU.|
|x11|X11 display server support|
|zstd|For KTX2 supercompression|

### Optional Features

|feature name|description|
|-|-|
|accesskit_unix|Enable AccessKit on Unix backends (currently only works with experimental screen readers and forks.)|
|android-native-activity|Android NativeActivity support. Legacy, should be avoided for most new Android games.|
|asset_processor|Enables the built-in asset processor for processed assets.|
|async-io|Use async-io's implementation of block_on instead of futures-lite's implementation. This is preferred if your application uses async-io.|
|basis-universal|Basis Universal compressed texture support|
|bevy_ci_testing|Enable systems that allow for automated testing on CI|
|bevy_debug_stepping|Enable stepping-based debugging of Bevy systems|
|bevy_dev_tools|Provides a collection of developer tools|
|bevy_remote|Enable the Bevy Remote Protocol|
|bmp|BMP image format support|
|dds|DDS compressed texture support|
|debug_glam_assert|Enable assertions in debug builds to check the validity of parameters passed to glam|
|detailed_trace|Enable detailed trace event logging. These trace events are expensive even when off, thus they require compile time opt-in|
|dynamic_linking|Force dynamic linking, which improves iterative compile times|
|embedded_watcher|Enables watching in-memory asset providers for Bevy Asset hot-reloading|
|experimental_pbr_pcss|Enable support for PCSS, at the risk of blowing past the global, per-shader sampler limit on older/lower-end GPUs|
|exr|EXR image format support|
|ff|Farbfeld image format support|
|file_watcher|Enables watching the filesystem for Bevy Asset hot-reloading|
|flac|FLAC audio format support|
|ghost_nodes|Experimental support for nodes that are ignored during UI layout|
|gif|GIF image format support|
|glam_assert|Enable assertions to check the validity of parameters passed to glam|
|ico|ICO image format support|
|ios_simulator|Enable support for the ios_simulator by downgrading some rendering capabilities|
|jpeg|JPEG image format support|
|meshlet|Enables the meshlet renderer for dense high-poly scenes (experimental)|
|meshlet_processor|Enables processing meshes into meshlet meshes for bevy_pbr|
|minimp3|MP3 audio format support (through minimp3)|
|mp3|MP3 audio format support|
|pbr_anisotropy_texture|Enable support for anisotropy texture in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pbr_multi_layer_material_textures|Enable support for multi-layer material textures in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pbr_transmission_textures|Enable support for transmission-related textures in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pnm|PNM image format support, includes pam, pbm, pgm and ppm|
|qoi|QOI image format support|
|reflect_functions|Enable function reflection|
|serialize|Enable serialization support through serde|
|shader_format_glsl|Enable support for shaders in GLSL|
|shader_format_spirv|Enable support for shaders in SPIR-V|
|spirv_shader_passthrough|Enable passthrough loading for SPIR-V shaders (Only supported on Vulkan, shader capabilities and extensions must agree with the platform implementation)|
|symphonia-aac|AAC audio format support (through symphonia)|
|symphonia-all|AAC, FLAC, MP3, MP4, OGG/VORBIS, and WAV audio formats support (through symphonia)|
|symphonia-flac|FLAC audio format support (through symphonia)|
|symphonia-isomp4|MP4 audio format support (through symphonia)|
|symphonia-vorbis|OGG/VORBIS audio format support (through symphonia)|
|symphonia-wav|WAV audio format support (through symphonia)|
|tga|TGA image format support|
|tiff|TIFF image format support|
|trace|Tracing support|
|trace_chrome|Tracing support, saving a file in Chrome Tracing format|
|trace_tracy|Tracing support, exposing a port for Tracy|
|trace_tracy_memory|Tracing support, with memory profiling, exposing a port for Tracy|
|track_change_detection|Enables source location tracking for change detection, which can assist with debugging|
|wav|WAV audio format support|
|wayland|Wayland display server support|
|webgpu|Enable support for WebGPU in Wasm. When enabled, this feature will override the `webgl2` feature and you won't be able to run Wasm builds with WebGL2, only with WebGPU.|
|webp|WebP image format support|
|zlib|For KTX2 supercompression|
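
Optional features are additive on top of whichever base set you chose. A minimal sketch, again with a placeholder version and an arbitrary pick of features from the table above:

```toml
[dependencies.bevy]
version = "0.15"   # placeholder; pin this to the Bevy release you actually use
# Default features stay enabled; these optional features are added on top.
features = ["wayland", "webp", "file_watcher"]
```

From a crate that depends on Bevy, a dependency's features can also be toggled at build time with Cargo's `--features` flag, for example `cargo run --features bevy/file_watcher`.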