<!-- MD041 - This file will be included in docs and should not start with a top header -->
<!-- markdownlint-disable-file MD041 -->
## Cargo Features
Bevy exposes many features to customise the engine. Enabling them adds functionality, but often comes at the cost of longer compilation times and extra dependencies.
### Default Features
The default feature set enables most of the expected features of a game engine, like rendering in both 2D and 3D, asset loading, audio and UI. To help reduce compilation time, consider disabling default features and enabling only those you need.
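For example, a minimal `Cargo.toml` sketch that disables the default features and opts back in to only a handful of them (the version and the exact feature selection here are illustrative, not a recommendation):

```toml
[dependencies.bevy]
version = "*"
default-features = false
# Opt back in to only the features this project actually uses.
features = ["bevy_asset", "bevy_winit", "bevy_sprite", "png"]
```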
|feature name|description|
|-|-|
|android_shared_stdcxx|Enable using a shared stdlib for cxx on Android|
|animation|Enable animation support, and glTF animation loading|
|bevy_animation|Provides animation functionality|
|bevy_asset|Provides asset functionality|
|bevy_audio|Provides audio functionality|
|bevy_color|Provides shared color types and operations|
|bevy_core_pipeline|Provides cameras and other basic render pipeline features|
|bevy_gilrs|Adds gamepad support|
|bevy_gizmos|Adds support for rendering gizmos|
|bevy_gltf|[glTF](https://www.khronos.org/gltf/) support|
|bevy_pbr|Adds PBR rendering|
|bevy_picking|Provides picking functionality|
|bevy_render|Provides rendering functionality|
|bevy_scene|Provides scene functionality|
|bevy_sprite|Provides sprite functionality|
|bevy_state|Enable built-in global state machines|
|bevy_text|Provides text functionality|
|bevy_ui|A custom ECS-driven UI framework|
|bevy_winit|winit window and input backend|
|default_font|Include a default font, containing only ASCII characters, at the cost of a 20kB binary size increase|
|hdr|HDR image format support|
|ktx2|KTX2 compressed texture support|
|multi_threaded|Enables multithreaded parallelism in the engine. Disabling it forces all engine tasks to run on a single thread.|
|png|PNG image format support|
|smaa_luts|Include SMAA look-up table KTX2 files|
|sysinfo_plugin|Enables system information diagnostic plugin|
|tonemapping_luts|Include tonemapping look-up table KTX2 files. If everything is pink, you need to enable this feature or change the `Tonemapping` method on your `Camera2dBundle` or `Camera3dBundle`.|
|vorbis|OGG/VORBIS audio format support|
|webgl2|Enable some limitations to be able to use WebGL2. Please refer to the [WebGL2 and WebGPU](https://github.com/bevyengine/bevy/tree/latest/examples#webgl2-and-webgpu) section of the examples README for more information on how to run Wasm builds with WebGPU.|
|x11|X11 display server support|
|zstd|For KTX2 supercompression|
### Optional Features
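These features are not part of the default set. As a sketch (version and feature choice are illustrative), they can be enabled on top of the defaults in `Cargo.toml`:

```toml
[dependencies]
# Keeps the default features and additionally enables Wayland support and serde serialization.
bevy = { version = "*", features = ["wayland", "serialize"] }
```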
|feature name|description|
|-|-|
|accesskit_unix|Enable AccessKit on Unix backends (currently only works with experimental screen readers and forks.)|
|asset_processor|Enables the built-in asset processor for processed assets.|
|async-io|Use `async-io`'s implementation of `block_on` instead of `futures-lite`'s implementation. This is preferred if your application uses `async-io`.|
|basis-universal|Basis Universal compressed texture support|
|bevy_ci_testing|Enable systems that allow for automated testing on CI|
|bevy_debug_stepping|Enable stepping-based debugging of Bevy systems|
|bevy_dev_tools|Provides a collection of developer tools|
|bmp|BMP image format support|
|dds|DDS compressed texture support|
|debug_glam_assert|Enable assertions in debug builds to check the validity of parameters passed to glam|
|detailed_trace|Enable detailed trace event logging. These trace events are expensive even when off, thus they require compile time opt-in|
|dynamic_linking|Force dynamic linking, which improves iterative compile times|
|embedded_watcher|Enables watching in-memory asset providers for Bevy Asset hot-reloading|
|exr|EXR image format support|
|file_watcher|Enables watching the filesystem for Bevy Asset hot-reloading|
|flac|FLAC audio format support|
|glam_assert|Enable assertions to check the validity of parameters passed to glam|
|ios_simulator|Enable support for the iOS simulator by downgrading some rendering capabilities|
|jpeg|JPEG image format support|
|meshlet|Enables the meshlet renderer for dense high-poly scenes (experimental)|
|meshlet_processor|Enables processing meshes into meshlet meshes for bevy_pbr|
|minimp3|MP3 audio format support (through minimp3)|
|mp3|MP3 audio format support|
|pbr_anisotropy_texture|Enable support for anisotropy texture in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pbr_multi_layer_material_textures|Enable support for multi-layer material textures in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pbr_transmission_textures|Enable support for transmission-related textures in the `StandardMaterial`, at the risk of blowing past the global, per-shader texture limit on older/lower-end GPUs|
|pnm|PNM image format support, includes pam, pbm, pgm and ppm|
|reflect_functions|Enable function reflection|
|serialize|Enable serialization support through serde|
|shader_format_glsl|Enable support for shaders in GLSL|
|shader_format_spirv|Enable support for shaders in SPIR-V|
|symphonia-aac|AAC audio format support (through symphonia)|
|symphonia-all|AAC, FLAC, MP3, MP4, OGG/VORBIS, and WAV audio formats support (through symphonia)|
|symphonia-flac|FLAC audio format support (through symphonia)|
|symphonia-isomp4|MP4 audio format support (through symphonia)|
|symphonia-vorbis|OGG/VORBIS audio format support (through symphonia)|
|symphonia-wav|WAV audio format support (through symphonia)|
|tga|TGA image format support|
|trace|Tracing support|
|trace_chrome|Tracing support, saving a file in Chrome Tracing format|
|trace_tracy|Tracing support, exposing a port for Tracy|
|trace_tracy_memory|Tracing support, with memory profiling, exposing a port for Tracy|
|track_change_detection|Enables source location tracking for change detection, which can assist with debugging|
|wav|WAV audio format support|
|wayland|Wayland display server support|
|webgpu|Enable support for WebGPU in Wasm. When enabled, this feature will override the `webgl2` feature and you won't be able to run Wasm builds with WebGL2, only with WebGPU (see the sketch after this table).|
|webp|WebP image format support|
|zlib|For KTX2 supercompression|
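
As noted in the `webgpu` row above, enabling that feature switches Wasm builds from WebGL2 to WebGPU. A minimal `Cargo.toml` sketch of what that looks like (the version is a placeholder):

```toml
[dependencies]
# With `webgpu` enabled, the default `webgl2` feature is overridden, so the
# resulting Wasm binary only runs in browsers that support WebGPU.
bevy = { version = "*", features = ["webgpu"] }
```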