bevy/examples/shader/post_processing.rs
Carter Anderson 35073cf7aa
Multiple Asset Sources (#9885)
This adds support for **Multiple Asset Sources**. You can now register a
named `AssetSource`, which you can load assets from like you normally
would:

```rust
let shader: Handle<Shader> = asset_server.load("custom_source://path/to/shader.wgsl");
```

Notice that `AssetPath` now supports `some_source://` syntax. The source
can be accessed through the `asset_path.source()` accessor.
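
For illustration, a minimal sketch (the `custom_source` name is illustrative; `AssetPath::parse` is covered later in this PR):

```rust
let path = AssetPath::parse("custom_source://path/to/shader.wgsl");
// The named source parsed from the `://` prefix:
println!("source: {:?}", path.source());
```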

Asset source names _are not required_. If one is not specified, the
default asset source will be used:

```rust
let shader: Handle<Shader> = asset_server.load("path/to/shader.wgsl");
```

The behavior of the default asset source has not changed; for example,
the `assets` folder is still the default.

As referenced in #9714

## Why?

**Multiple Asset Sources** enables a number of often-asked-for
scenarios:

* **Loading some assets from other locations on disk**: you could create
a `config` asset source that reads from the OS-default config folder
(not implemented in this PR)
* **Loading some assets from a remote server**: you could register a new
`remote` asset source that reads some assets from a remote http server
(not implemented in this PR)
* **Improved "Binary Embedded" Assets**: we can use this system for
"embedded-in-binary assets", which allows us to replace the old
`load_internal_asset!` approach, which couldn't support asset
processing, didn't support hot-reloading _well_, and didn't make
embedded assets accessible to the `AssetServer` (implemented in this PR)

## Adding New Asset Sources

An `AssetSource` is "just" a collection of `AssetReader`, `AssetWriter`,
and `AssetWatcher` entries. You can configure new asset sources like
this:

```rust
app.register_asset_source(
    "other",
    AssetSource::build()
        .with_reader(|| Box::new(FileAssetReader::new("other"))),
);
```

Note that `AssetSource` construction _must_ be repeatable, which is why
a closure is accepted.
`AssetSourceBuilder` supports `with_reader`, `with_writer`,
`with_watcher`, `with_processed_reader`, `with_processed_writer`, and
`with_processed_watcher`.
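
Once registered, the new source can be used with the `source://` path syntax shown above; for example (the asset path is illustrative):

```rust
let image: Handle<Image> = asset_server.load("other://branding/icon.png");
```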

Note that the "asset source" system replaces the old "asset providers"
system.

## Processing Multiple Sources

The `AssetProcessor` now supports multiple asset sources! Processed
assets can refer to assets in other sources and everything "just works".
Each `AssetSource` defines an unprocessed and processed `AssetReader` /
`AssetWriter`.
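
For example, a hedged sketch of configuring both, assuming `with_processed_reader` mirrors the `with_reader` closure signature (paths are illustrative):

```rust
app.register_asset_source(
    "other",
    AssetSource::build()
        // Reader for the unprocessed assets
        .with_reader(|| Box::new(FileAssetReader::new("other")))
        // Reader for the processed output of those assets
        .with_processed_reader(|| Box::new(FileAssetReader::new("imported_assets/other"))),
);
```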

Currently this is all or nothing for a given `AssetSource`. A given
source is either processed or it is not. Later we might want to add
support for "lazy asset processing", where an `AssetSource` (such as a
remote server) can be configured to only process assets that are
directly referenced by local assets (in order to save local disk space
and avoid doing extra work).

## A new `AssetSource`: `embedded`

One of the big features motivating **Multiple Asset Sources** was
improving our "embedded-in-binary" asset loading. To prove out the
**Multiple Asset Sources** implementation, I chose to build a new
`embedded` `AssetSource`, which replaces the old `load_internal_asset!`
system.

The old `load_internal_asset!` approach had a number of issues:

* The `AssetServer` was not aware of (or capable of loading) internal
assets.
* Because internal assets weren't visible to the `AssetServer`, they
could not be processed (or used by assets that are processed). This
would prevent things like "preprocessing shaders that depend on built-in
Bevy shaders", which is something we desperately need to start doing.
* Each "internal asset" needed a UUID to be defined in-code to reference
it. This was very manual and toilsome.

The new `embedded` `AssetSource` enables the following pattern:

```rust
// Called in `crates/bevy_pbr/src/render/mesh.rs`
embedded_asset!(app, "mesh.wgsl");

// later in the app
let shader: Handle<Shader> = asset_server.load("embedded://bevy_pbr/render/mesh.wgsl");
```

Notice that this always treats the crate name as the "root path", and it
trims out the `src` path for brevity. This is generally predictable, but
if you need to debug you can use the new `embedded_path!` macro to get a
`PathBuf` that matches the one used by `embedded_asset`.
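
For example (a sketch, assuming the macro is invoked from `crates/bevy_pbr/src/render/mesh.rs`):

```rust
// Resolves to roughly `bevy_pbr/render/mesh.wgsl`: the crate name is the
// root and the `src` segment is trimmed.
let path = embedded_path!("mesh.wgsl");
```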

You can also reference embedded assets in arbitrary assets, such as WGSL
shaders:

```wgsl
#import "embedded://bevy_pbr/render/mesh.wgsl"
```

This also makes `embedded` assets go through the "normal" asset
lifecycle. They are only loaded when they are actually used!

We are also discussing implicitly converting asset paths to/from shader
modules, so in the future (not in this PR) you might be able to import
it like this:

```wgsl
#import bevy_pbr::render::mesh::Vertex
```

Compare that to the old system!

```rust
pub const MESH_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(3252377289100772450);

load_internal_asset!(app, MESH_SHADER_HANDLE, "mesh.wgsl", Shader::from_wgsl);

// The mesh shader is _only_ accessible via MESH_SHADER_HANDLE and _cannot_ be loaded via the AssetServer.
```

## Hot Reloading `embedded`

You can enable `embedded` hot reloading by enabling the
`embedded_watcher` cargo feature:

```sh
cargo run --features=embedded_watcher
```

## Improved Hot Reloading Workflow

First: the `filesystem_watcher` cargo feature has been renamed to
`file_watcher` for brevity (and to match the `FileAssetReader` naming
convention).

More importantly, hot asset reloading is no longer configured in-code by
default. If you enable any asset watcher feature (such as `file_watcher`
or `embedded_watcher`), asset watching will be automatically enabled.

This removes the need to _also_ enable hot reloading in your app code.
That means you can replace this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::default().watch_for_changes()))
```

with this:

```rust
app.add_plugins(DefaultPlugins)
```

If you want to hot reload assets in your app during development, just
run your app like this:

```sh
cargo run --features=file_watcher
```

This means you can use the same code for development and deployment! To
deploy an app, just don't include the watcher feature:

```sh
cargo build --release
```

My intent is to move to this approach for pretty much all dev workflows.
In a future PR I would like to replace `AssetMode::ProcessedDev` with a
`runtime-processor` cargo feature. We could then group all common "dev"
cargo features under a single `dev` feature:

```sh
# this would enable file_watcher, embedded_watcher, runtime-processor, and more
cargo run --features=dev
```

## AssetMode

`AssetPlugin::Unprocessed`, `AssetPlugin::Processed`, and
`AssetPlugin::ProcessedDev` have been replaced with an `AssetMode` field
on `AssetPlugin`.

```rust
// before
app.add_plugins(DefaultPlugins.set(AssetPlugin::Processed { /* fields here */ }));

// after
app.add_plugins(DefaultPlugins.set(AssetPlugin { mode: AssetMode::Processed, ..default() }));
```

This aligns `AssetPlugin` with our other struct-like plugins. The old
"source" and "destination" `AssetProvider` fields in the enum variants
have been replaced by the "asset source" system. You no longer need to
configure `AssetPlugin` to "point" to custom asset providers.

## AssetServerMode

To improve the implementation of **Multiple Asset Sources**,
`AssetServer` was made aware of whether it is using "processed" or
"unprocessed" assets. You can check that like this:

```rust
if asset_server.mode() == AssetServerMode::Processed {
    /* do something */
}
```

Note that this refactor should also prepare the way for building "one to
many processed output files", as it makes the server aware of whether it
is loading from processed or unprocessed sources, meaning we can store
and read processed and unprocessed assets differently!

## AssetPath can now refer to folders

The "file only" restriction has been removed from `AssetPath`. The
`AssetServer::load_folder` API now accepts an `AssetPath` instead of a
`Path`, meaning you can load folders from other asset sources!
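
A short sketch, assuming a registered `remote` source (the source and folder names are illustrative):

```rust
let folder: Handle<LoadedFolder> = asset_server.load_folder("remote://models");
```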

## Improved AssetPath Parsing

`AssetPath` parsing was reworked to support sources, improve error
messages, and to enable parsing with a single pass over the string.
`AssetPath::new` was replaced by `AssetPath::parse` and
`AssetPath::try_parse`.
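
A minimal sketch of the two entry points (my understanding is that `parse` panics on malformed paths while `try_parse` surfaces the error):

```rust
let path = AssetPath::parse("embedded://bevy_pbr/render/mesh.wgsl");

match AssetPath::try_parse("embedded://bevy_pbr/render/mesh.wgsl") {
    Ok(path) => println!("source: {:?}", path.source()),
    Err(err) => eprintln!("invalid asset path: {err:?}"),
}
```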

## AssetWatcher broken out from AssetReader

`AssetReader` is no longer responsible for constructing `AssetWatcher`.
This has been moved to `AssetSourceBuilder`.


## Duplicate Event Debouncing

Asset V2 already debounced duplicate filesystem events, but only for
_input_ events. Multiple input event types can produce the same _output_
`AssetSourceEvent`. Now that we have `embedded_watcher`, which does
expensive file IO on events, it made sense to debounce output events
too, so I added that! This will also benefit the AssetProcessor by
preventing integrity checks for duplicate events (and helps keep the
noise down in trace logs).
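
The core idea, as a generic sketch (not Bevy's exact internals): drop an output event if an identical one is already queued in the current batch.

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Keep only the first occurrence of each event in a batch.
fn debounce<E: Eq + Hash + Clone>(events: Vec<E>) -> Vec<E> {
    let mut seen = HashSet::new();
    events.into_iter().filter(|e| seen.insert(e.clone())).collect()
}
```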

## Next Steps

* **Port Built-in Shaders**: Currently the primary (and essentially
only) user of `load_internal_asset!` in Bevy's source code is "built-in
shaders". I chose not to port them in this PR for a few reasons:
  1. We need to add the ability to pass shader defs in to shaders via
  meta files. Some shaders (such as MESH_VIEW_TYPES) need to pass shader
  def values in that are defined in code.
  2. We need to revisit the current shader module naming system. I think
  we _probably_ want to imply modules from source structure (at least by
  default), ideally in a way that can losslessly convert asset paths
  to/from shader modules (to enable the asset system to resolve modules
  using the asset server).
  3. I want to keep this change set minimal / get this merged first.
* **Deprecate `load_internal_asset!`**: we can't do that until we do (1)
and (2).
* **Relative Asset Paths**: This PR significantly increases the need for
relative asset paths (which was already pretty high). Currently when
loading dependencies, the path is assumed to be absolute, which means
that if in an `AssetLoader` you call `context.load("some/path/image.png")`
it will assume that path is in the "default" asset source, _even if the
current asset is in a different asset source_ (see the sketch after this
list). This will cause breakage for `AssetLoader`s that are not designed
to add the current source to whatever paths are being used.
`AssetLoader`s should generally not need to be aware of the name of
their current asset source, or need to think about the "current asset
source" generally. We should build APIs that support relative asset
paths and then encourage using relative paths as much as possible (both
via API design and docs). Relative paths are also important because they
will allow developers to move folders around (even across sources)
without reprocessing, provided there is no path breakage.
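
Here is a hedged sketch of that pitfall (the loader body and paths are illustrative):

```rust
// Inside an `AssetLoader` for an asset that lives at
// `remote://models/scene.gltf`:
let image: Handle<Image> = load_context.load("textures/base.png");
// Today this resolves against the _default_ source
// (`textures/base.png`), not `remote://textures/base.png`.
```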
2023-10-13 23:17:32 +00:00


//! This example shows how to create a custom render pass that runs after the main pass
//! and reads the texture generated by the main pass.
//!
//! The example shader is a very simple implementation of chromatic aberration.
//!
//! This is a fairly low level example and assumes some familiarity with rendering concepts and wgpu.

use bevy::{
    core_pipeline::{
        clear_color::ClearColorConfig, core_3d,
        fullscreen_vertex_shader::fullscreen_shader_vertex_state,
    },
    ecs::query::QueryItem,
    prelude::*,
    render::{
        extract_component::{
            ComponentUniforms, ExtractComponent, ExtractComponentPlugin, UniformComponentPlugin,
        },
        render_graph::{
            NodeRunError, RenderGraphApp, RenderGraphContext, ViewNode, ViewNodeRunner,
        },
        render_resource::{
            BindGroupDescriptor, BindGroupEntry, BindGroupLayout, BindGroupLayoutDescriptor,
            BindGroupLayoutEntry, BindingResource, BindingType, CachedRenderPipelineId,
            ColorTargetState, ColorWrites, FragmentState, MultisampleState, Operations,
            PipelineCache, PrimitiveState, RenderPassColorAttachment, RenderPassDescriptor,
            RenderPipelineDescriptor, Sampler, SamplerBindingType, SamplerDescriptor, ShaderStages,
            ShaderType, TextureFormat, TextureSampleType, TextureViewDimension,
        },
        renderer::{RenderContext, RenderDevice},
        texture::BevyDefault,
        view::ViewTarget,
        RenderApp,
    },
};

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, PostProcessPlugin))
        .add_systems(Startup, setup)
        .add_systems(Update, (rotate, update_settings))
        .run();
}

/// It is generally encouraged to set up post processing effects as a plugin
struct PostProcessPlugin;

impl Plugin for PostProcessPlugin {
    fn build(&self, app: &mut App) {
        app.add_plugins((
            // The settings will be a component that lives in the main world but will
            // be extracted to the render world every frame.
            // This makes it possible to control the effect from the main world.
            // This plugin will take care of extracting it automatically.
            // It's important to derive [`ExtractComponent`] on [`PostProcessSettings`]
            // for this plugin to work correctly.
            ExtractComponentPlugin::<PostProcessSettings>::default(),
            // The settings will also be the data used in the shader.
            // This plugin will prepare the component for the GPU by creating a uniform buffer
            // and writing the data to that buffer every frame.
            UniformComponentPlugin::<PostProcessSettings>::default(),
        ));

        // We need to get the render app from the main app
        let Ok(render_app) = app.get_sub_app_mut(RenderApp) else {
            return;
        };

        render_app
            // Bevy's renderer uses a render graph which is a collection of nodes in a directed acyclic graph.
            // It currently runs on each view/camera and executes each node in the specified order.
            // It will make sure that any node that needs a dependency from another node
            // only runs when that dependency is done.
            //
            // Each node can execute arbitrary work, but it generally runs at least one render pass.
            // A node only has access to the render world, so if you need data from the main world
            // you need to extract it manually or with the plugin like above.
            // Add a [`Node`] to the [`RenderGraph`]
            // The Node needs to impl FromWorld
            //
            // The [`ViewNodeRunner`] is a special [`Node`] that will automatically run the node for each view
            // matching the [`ViewQuery`]
            .add_render_graph_node::<ViewNodeRunner<PostProcessNode>>(
                // Specify the name of the graph, in this case we want the graph for 3d
                core_3d::graph::NAME,
                // It also needs the name of the node
                PostProcessNode::NAME,
            )
            .add_render_graph_edges(
                core_3d::graph::NAME,
                // Specify the node ordering.
                // This will automatically create all required node edges to enforce the given ordering.
                &[
                    core_3d::graph::node::TONEMAPPING,
                    PostProcessNode::NAME,
                    core_3d::graph::node::END_MAIN_PASS_POST_PROCESSING,
                ],
            );
    }

    fn finish(&self, app: &mut App) {
        // We need to get the render app from the main app
        let Ok(render_app) = app.get_sub_app_mut(RenderApp) else {
            return;
        };

        render_app
            // Initialize the pipeline
            .init_resource::<PostProcessPipeline>();
    }
}

// The post process node used for the render graph
#[derive(Default)]
struct PostProcessNode;

impl PostProcessNode {
    pub const NAME: &'static str = "post_process";
}

// The ViewNode trait is required by the ViewNodeRunner
impl ViewNode for PostProcessNode {
    // The node needs a query to gather data from the ECS in order to do its rendering,
    // but it's not a normal system so we need to define it manually.
    //
    // This query will only run on the view entity
    type ViewQuery = &'static ViewTarget;

    // Runs the node logic
    // This is where you encode draw commands.
    //
    // This will run on every view on which the graph is running.
    // If you don't want your effect to run on every camera,
    // you'll need to make sure you have a marker component as part of [`ViewQuery`]
    // to identify which camera(s) should run the effect.
    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        view_target: QueryItem<Self::ViewQuery>,
        world: &World,
    ) -> Result<(), NodeRunError> {
        // Get the pipeline resource that contains the global data we need
        // to create the render pipeline
        let post_process_pipeline = world.resource::<PostProcessPipeline>();

        // The pipeline cache is a cache of all previously created pipelines.
        // It is required to avoid creating a new pipeline each frame,
        // which is expensive due to shader compilation.
        let pipeline_cache = world.resource::<PipelineCache>();

        // Get the pipeline from the cache
        let Some(pipeline) = pipeline_cache.get_render_pipeline(post_process_pipeline.pipeline_id)
        else {
            return Ok(());
        };

        // Get the settings uniform binding
        let settings_uniforms = world.resource::<ComponentUniforms<PostProcessSettings>>();
        let Some(settings_binding) = settings_uniforms.uniforms().binding() else {
            return Ok(());
        };

        // This will start a new "post process write", obtaining two texture
        // views from the view target - a `source` and a `destination`.
        // `source` is the "current" main texture and you _must_ write into
        // `destination` because calling `post_process_write()` on the
        // [`ViewTarget`] will internally flip the [`ViewTarget`]'s main
        // texture to the `destination` texture. Failing to do so will cause
        // the current main texture information to be lost.
        let post_process = view_target.post_process_write();

        // The bind_group gets created each frame.
        //
        // Normally, you would create a bind_group in the Queue set,
        // but this doesn't work with the post_process_write().
        // The reason it doesn't work is because each post_process_write will alternate the source/destination.
        // The only way to have the correct source/destination for the bind_group
        // is to make sure you get it during the node execution.
        let bind_group = render_context
            .render_device()
            .create_bind_group(&BindGroupDescriptor {
                label: Some("post_process_bind_group"),
                layout: &post_process_pipeline.layout,
                // It's important for this to match the BindGroupLayout defined in the PostProcessPipeline
                entries: &[
                    BindGroupEntry {
                        binding: 0,
                        // Make sure to use the source view
                        resource: BindingResource::TextureView(post_process.source),
                    },
                    BindGroupEntry {
                        binding: 1,
                        // Use the sampler created for the pipeline
                        resource: BindingResource::Sampler(&post_process_pipeline.sampler),
                    },
                    BindGroupEntry {
                        binding: 2,
                        // Set the settings binding
                        resource: settings_binding.clone(),
                    },
                ],
            });

        // Begin the render pass
        let mut render_pass = render_context.begin_tracked_render_pass(RenderPassDescriptor {
            label: Some("post_process_pass"),
            color_attachments: &[Some(RenderPassColorAttachment {
                // We need to specify the post process destination view here
                // to make sure we write to the appropriate texture.
                view: post_process.destination,
                resolve_target: None,
                ops: Operations::default(),
            })],
            depth_stencil_attachment: None,
        });

        // This is mostly just wgpu boilerplate for drawing a fullscreen triangle,
        // using the pipeline/bind_group created above
        render_pass.set_render_pipeline(pipeline);
        render_pass.set_bind_group(0, &bind_group, &[]);
        render_pass.draw(0..3, 0..1);

        Ok(())
    }
}

// This contains global data used by the render pipeline. This will be created once on startup.
#[derive(Resource)]
struct PostProcessPipeline {
    layout: BindGroupLayout,
    sampler: Sampler,
    pipeline_id: CachedRenderPipelineId,
}

impl FromWorld for PostProcessPipeline {
    fn from_world(world: &mut World) -> Self {
        let render_device = world.resource::<RenderDevice>();

        // We need to define the bind group layout used for our pipeline
        let layout = render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
            label: Some("post_process_bind_group_layout"),
            entries: &[
                // The screen texture
                BindGroupLayoutEntry {
                    binding: 0,
                    visibility: ShaderStages::FRAGMENT,
                    ty: BindingType::Texture {
                        sample_type: TextureSampleType::Float { filterable: true },
                        view_dimension: TextureViewDimension::D2,
                        multisampled: false,
                    },
                    count: None,
                },
                // The sampler that will be used to sample the screen texture
                BindGroupLayoutEntry {
                    binding: 1,
                    visibility: ShaderStages::FRAGMENT,
                    ty: BindingType::Sampler(SamplerBindingType::Filtering),
                    count: None,
                },
                // The settings uniform that will control the effect
                BindGroupLayoutEntry {
                    binding: 2,
                    visibility: ShaderStages::FRAGMENT,
                    ty: BindingType::Buffer {
                        ty: bevy::render::render_resource::BufferBindingType::Uniform,
                        has_dynamic_offset: false,
                        min_binding_size: Some(PostProcessSettings::min_size()),
                    },
                    count: None,
                },
            ],
        });

        // We can create the sampler here since it won't change at runtime and doesn't depend on the view
        let sampler = render_device.create_sampler(&SamplerDescriptor::default());

        // Get the shader handle
        let shader = world
            .resource::<AssetServer>()
            .load("shaders/post_processing.wgsl");

        let pipeline_id = world
            .resource_mut::<PipelineCache>()
            // This will add the pipeline to the cache and queue its creation
            .queue_render_pipeline(RenderPipelineDescriptor {
                label: Some("post_process_pipeline".into()),
                layout: vec![layout.clone()],
                // This will setup a fullscreen triangle for the vertex state
                vertex: fullscreen_shader_vertex_state(),
                fragment: Some(FragmentState {
                    shader,
                    shader_defs: vec![],
                    // Make sure this matches the entry point of your shader.
                    // It can be anything as long as it matches here and in the shader.
                    entry_point: "fragment".into(),
                    targets: vec![Some(ColorTargetState {
                        format: TextureFormat::bevy_default(),
                        blend: None,
                        write_mask: ColorWrites::ALL,
                    })],
                }),
                // All of the following properties are not important for this effect so just use the default values.
                // This struct doesn't have the Default trait implemented because not all fields can have a default value.
                primitive: PrimitiveState::default(),
                depth_stencil: None,
                multisample: MultisampleState::default(),
                push_constant_ranges: vec![],
            });

        Self {
            layout,
            sampler,
            pipeline_id,
        }
    }
}

// This is the component that will get passed to the shader
#[derive(Component, Default, Clone, Copy, ExtractComponent, ShaderType)]
struct PostProcessSettings {
    intensity: f32,
    // WebGL2 structs must be 16 byte aligned.
    #[cfg(feature = "webgl2")]
    _webgl2_padding: Vec3,
}

/// Set up a simple 3D scene
fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    // camera
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_translation(Vec3::new(0.0, 0.0, 5.0))
                .looking_at(Vec3::default(), Vec3::Y),
            camera_3d: Camera3d {
                clear_color: ClearColorConfig::Custom(Color::WHITE),
                ..default()
            },
            ..default()
        },
        // Add the setting to the camera.
        // This component is also used to determine on which camera to run the post processing effect.
        PostProcessSettings {
            intensity: 0.02,
            ..default()
        },
    ));

    // cube
    commands.spawn((
        PbrBundle {
            mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
            material: materials.add(Color::rgb(0.8, 0.7, 0.6).into()),
            transform: Transform::from_xyz(0.0, 0.5, 0.0),
            ..default()
        },
        Rotates,
    ));

    // light
    commands.spawn(PointLightBundle {
        transform: Transform::from_translation(Vec3::new(0.0, 0.0, 10.0)),
        ..default()
    });
}

#[derive(Component)]
struct Rotates;

/// Rotates any entity with the `Rotates` component around the x and z axes
fn rotate(time: Res<Time>, mut query: Query<&mut Transform, With<Rotates>>) {
    for mut transform in &mut query {
        transform.rotate_x(0.55 * time.delta_seconds());
        transform.rotate_z(0.15 * time.delta_seconds());
    }
}

// Change the intensity over time to show that the effect is controlled from the main world
fn update_settings(mut settings: Query<&mut PostProcessSettings>, time: Res<Time>) {
    for mut setting in &mut settings {
        let mut intensity = time.elapsed_seconds().sin();
        // Make it loop periodically
        intensity = intensity.sin();
        // Remap it to 0..1 because the intensity can't be negative
        intensity = intensity * 0.5 + 0.5;
        // Scale it to a more reasonable level
        intensity *= 0.015;

        // Set the intensity.
        // This will then be extracted to the render world and uploaded to the gpu automatically by the [`UniformComponentPlugin`]
        setting.intensity = intensity;
    }
}