bevy/examples/shader/post_processing.rs
Alice Cecile 599e5e4e76
Migrate from LegacyColor to bevy_color::Color (#12163)
# Objective

- As part of the migration process we need to a) see the end effect of
the migration on user ergonomics, b) check for serious perf regressions,
and c) actually migrate the code.
- To accomplish this, I'm going to attempt to migrate all of the
remaining user-facing usages of `LegacyColor` in one PR, being careful
to keep a clean commit history.
- Fixes #12056.

## Solution

I've chosen to use the polymorphic `Color` type as our standard
user-facing API.
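
For illustration, this is roughly what the polymorphic API looks like from the user's side (a sketch; `draw_axes` is a hypothetical system, and `Gizmos::line` stands in for any of the migrated `impl Into<Color>` APIs):

```rust
use bevy::color::LinearRgba;
use bevy::prelude::*;

// Any concrete color type that converts into `Color` can be passed directly
// to APIs that take `impl Into<Color>`.
fn draw_axes(mut gizmos: Gizmos) {
    // A polymorphic `Color` works...
    gizmos.line(Vec3::ZERO, Vec3::X, Color::srgb(1.0, 0.0, 0.0));
    // ...and so does a concrete color-space value like `LinearRgba`.
    gizmos.line(Vec3::ZERO, Vec3::Y, LinearRgba::GREEN);
}
```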

- [x] Migrate `bevy_gizmos`.
- [x] Take `impl Into<Color>` in all `bevy_gizmos` APIs
- [x] Migrate sprites
- [x] Migrate UI
- [x] Migrate `ColorMaterial`
- [x] Migrate `MaterialMesh2D`
- [x] Migrate fog
- [x] Migrate lights
- [x] Migrate StandardMaterial
- [x] Migrate wireframes
- [x] Migrate clear color
- [x] Migrate text
- [x] Migrate gltf loader
- [x] Register color types for reflection
- [x] Remove `LegacyColor`
- [x] Make sure CI passes

Incidental improvements to ease migration (see the sketch after this list):

- added `Color::srgba_u8`, `Color::srgba_from_array` and friends
- added `set_alpha`, `is_fully_transparent` and `is_fully_opaque` to the
`Alpha` trait
- add and immediately deprecate (lol) `Color::rgb` and friends in favor
of more explicit and consistent `Color::srgb`
- standardized on white and black for most example text colors
- added vector field traits to `LinearRgba`: ~~`Add`, `Sub`,
`AddAssign`, `SubAssign`,~~ `Mul<f32>` and `Div<f32>`. Multiplications
and divisions do not scale alpha. `Add` and `Sub` have been cut from
this PR.
- added `LinearRgba` and `Srgba` `RED/GREEN/BLUE`
- added `LinearRgba::to_f32_array` and `LinearRgba::to_u32`
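
A quick sketch of these helpers together (`helpers_demo` is just an illustrative function, not part of the API):

```rust
use bevy::color::{Alpha, Color, LinearRgba};

fn helpers_demo() {
    // Explicit sRGB constructors replace the ambiguous `Color::rgb` family.
    let mut color = Color::srgba_u8(255, 128, 0, 255);
    // `Alpha` trait methods are available on `Color` and the concrete types.
    color.set_alpha(0.5);
    assert!(!color.is_fully_opaque() && !color.is_fully_transparent());
    // `Mul<f32>` / `Div<f32>` scale the color channels but leave alpha alone.
    let boosted = LinearRgba::RED * 2.0;
    assert_eq!(boosted.alpha, 1.0);
    // Packing helpers.
    let _array: [f32; 4] = boosted.to_f32_array();
    let _packed: u32 = LinearRgba::BLUE.to_u32();
}
```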

## Migration Guide

Bevy's color types have changed! Wherever you used a
`bevy::render::Color`, a `bevy::color::Color` is used instead.

These are quite similar! Both are enums storing a color in a specific
color space (or, more precisely, using a specific color model).
However, each of the different color models now has its own type. A
consolidated before/after sketch follows the migration list below.
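
Sketching the new shape (illustrative; `typed_colors` is a hypothetical helper, not an API):

```rust
use bevy::color::{Color, LinearRgba};

// `Color` is now an enum with one variant per color model, each wrapping a
// strongly-typed value: `Color::Srgba(Srgba)`, `Color::LinearRgba(LinearRgba)`, ...
fn typed_colors() {
    let color = Color::srgb(0.5, 0.5, 1.0); // stored as `Color::Srgba(...)`
    let linear: LinearRgba = color.into(); // explicit conversion between models
    let _back: Color = linear.into(); // wrap it back up when needed
}
```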

TODO...

- `Color::rgba`, `Color::rgb`, `Color::rgba_u8`, `Color::rgb_u8`,
`Color::rgb_from_array` are now `Color::srgba`, `Color::srgb`,
`Color::srgba_u8`, `Color::srgb_u8` and `Color::srgb_from_array`.
- `Color::set_a` and `Color::a` are now `Color::set_alpha` and
`Color::alpha`. These are part of the `Alpha` trait in `bevy_color`.
- `Color::is_fully_transparent` is now part of the `Alpha` trait in
`bevy_color`
- `Color::r`, `Color::set_r`, `Color::with_r` and the equivalents for
`g`, `b`, `h`, `s` and `l` have been removed, as they caused silent,
relatively expensive conversions. Convert your `Color` into the desired
color space, perform your operations there, and then convert it back
into a polymorphic `Color` enum.
- `Color::hex` is now `Srgba::hex`. Call `.into()` on the result or
construct a `Color::Srgba` variant manually to convert it.
- `WireframeMaterial`, `ExtractedUiNode`, `ExtractedDirectionalLight`,
`ExtractedPointLight`, `ExtractedSpotLight` and `ExtractedSprite` now
store a `LinearRgba`, rather than a polymorphic `Color`
- `Color::rgb_linear` and `Color::rgba_linear` are now
`Color::linear_rgb` and `Color::linear_rgba`
- The various CSS color constants are no longer stored directly on
`Color`. Instead, they're defined in the `Srgba` color space, and
accessed via `bevy::color::palettes::css`. Call `.into()` on them to
convert them into a `Color` for quick debugging use, and consider using
the much prettier `tailwind` palette for prototyping.
- The `LIME_GREEN` color has been renamed to `LIMEGREEN` to comply with
the standard naming.
- Vector field arithmetic operations on `Color` (add, subtract, multiply
and divide by an `f32`) have been removed. Instead, convert your colors
into `LinearRgba` space, and perform your operations explicitly there.
This is particularly relevant when working with emissive or HDR colors,
whose color channel values are routinely outside of the ordinary 0 to 1
range.
- `Color::as_linear_rgba_f32` has been removed. Call
`LinearRgba::to_f32_array` instead, converting if needed.
- `Color::as_linear_rgba_u32` has been removed. Call
`LinearRgba::to_u32` instead, converting if needed.
- Several other color conversion methods to transform LCH or HSL colors
into float arrays or `Vec` types have been removed. Please reimplement
these externally or open a PR to re-add them if you found them
particularly useful.
- Various methods on `Color` such as `rgb` or `hsl` to convert the color
into a specific color space have been removed. Convert into
`LinearRgba`, then to the color space of your choice.
- Various implicitly-converting color value methods on `Color` such as
`r`, `g`, `b` or `h` have been removed. Please convert it into the color
space of your choice, then check these properties.
- `Color` no longer implements `AsBindGroup`. Store a `LinearRgba`
internally instead to avoid conversion costs.
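
Putting the most common migrations together (a before/after sketch; the old calls are shown in comments, and `migrate` is just an illustrative function):

```rust
use bevy::color::palettes::css;
use bevy::color::{Alpha, Color, LinearRgba, Srgba};

fn migrate() {
    // Was `Color::rgb(0.8, 0.7, 0.6)` / `Color::rgb_linear(1.0, 0.5, 0.0)`.
    let mut color = Color::srgb(0.8, 0.7, 0.6);
    let _linear = Color::linear_rgb(1.0, 0.5, 0.0);

    // Was `color.set_a(0.5)`; alpha handling now lives on the `Alpha` trait.
    color.set_alpha(0.5);

    // Was `Color::hex("e6b44c")`; hex parsing moved to `Srgba`.
    let hex: Color = Srgba::hex("e6b44c").unwrap().into();

    // Was `Color::LIME_GREEN`; CSS constants are now `Srgba` palette values.
    let _lime: Color = css::LIMEGREEN.into();

    // Was `color.set_r(color.r() * 2.0)` and `color * 1.5`: convert to a
    // concrete space, operate there, then convert back.
    let mut linear: LinearRgba = hex.into();
    linear.red *= 2.0;
    let _hdr: Color = (linear * 1.5).into();
}
```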

---------

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Afonso Lage <lage.afonso@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
2024-02-29 19:35:12 +00:00


//! This example shows how to create a custom render pass that runs after the main pass
//! and reads the texture generated by the main pass.
//!
//! The example shader is a very simple implementation of chromatic aberration.
//!
//! This is a fairly low-level example and assumes some familiarity with rendering concepts and wgpu.
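//!
//! Chromatic aberration is typically implemented by sampling the red, green and
//! blue channels at slightly offset UV coordinates; in this example the size of
//! the offset is driven by the `intensity` value in [`PostProcessSettings`].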
use bevy::{
core_pipeline::{
core_3d::graph::{Core3d, Node3d},
fullscreen_vertex_shader::fullscreen_shader_vertex_state,
},
ecs::query::QueryItem,
prelude::*,
render::{
extract_component::{
ComponentUniforms, ExtractComponent, ExtractComponentPlugin, UniformComponentPlugin,
},
render_graph::{
NodeRunError, RenderGraphApp, RenderGraphContext, RenderLabel, ViewNode, ViewNodeRunner,
},
render_resource::{
binding_types::{sampler, texture_2d, uniform_buffer},
*,
},
renderer::{RenderContext, RenderDevice},
texture::BevyDefault,
view::ViewTarget,
RenderApp,
},
};
fn main() {
App::new()
.add_plugins((DefaultPlugins, PostProcessPlugin))
.add_systems(Startup, setup)
.add_systems(Update, (rotate, update_settings))
.run();
}
/// It is generally encouraged to set up post processing effects as a plugin
struct PostProcessPlugin;
impl Plugin for PostProcessPlugin {
fn build(&self, app: &mut App) {
app.add_plugins((
// The settings will be a component that lives in the main world but will
// be extracted to the render world every frame.
// This makes it possible to control the effect from the main world.
// This plugin will take care of extracting it automatically.
            // It's important to derive [`ExtractComponent`] on [`PostProcessSettings`]
// for this plugin to work correctly.
ExtractComponentPlugin::<PostProcessSettings>::default(),
// The settings will also be the data used in the shader.
// This plugin will prepare the component for the GPU by creating a uniform buffer
// and writing the data to that buffer every frame.
UniformComponentPlugin::<PostProcessSettings>::default(),
));
// We need to get the render app from the main app
let Ok(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app
// Bevy's renderer uses a render graph which is a collection of nodes in a directed acyclic graph.
// It currently runs on each view/camera and executes each node in the specified order.
// It will make sure that any node that needs a dependency from another node
// only runs when that dependency is done.
//
// Each node can execute arbitrary work, but it generally runs at least one render pass.
// A node only has access to the render world, so if you need data from the main world
// you need to extract it manually or with the plugin like above.
// Add a [`Node`] to the [`RenderGraph`]
// The Node needs to impl FromWorld
//
// The [`ViewNodeRunner`] is a special [`Node`] that will automatically run the node for each view
// matching the [`ViewQuery`]
.add_render_graph_node::<ViewNodeRunner<PostProcessNode>>(
// Specify the label of the graph, in this case we want the graph for 3d
Core3d,
// It also needs the label of the node
PostProcessLabel,
)
.add_render_graph_edges(
Core3d,
// Specify the node ordering.
// This will automatically create all required node edges to enforce the given ordering.
(
Node3d::Tonemapping,
PostProcessLabel,
Node3d::EndMainPassPostProcessing,
),
);
}
fn finish(&self, app: &mut App) {
// We need to get the render app from the main app
let Ok(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app
// Initialize the pipeline
.init_resource::<PostProcessPipeline>();
}
}
#[derive(Debug, Hash, PartialEq, Eq, Clone, RenderLabel)]
struct PostProcessLabel;
// The post process node used for the render graph
#[derive(Default)]
struct PostProcessNode;
// The ViewNode trait is required by the ViewNodeRunner
impl ViewNode for PostProcessNode {
// The node needs a query to gather data from the ECS in order to do its rendering,
// but it's not a normal system so we need to define it manually.
//
// This query will only run on the view entity
type ViewQuery = (
&'static ViewTarget,
// This makes sure the node only runs on cameras with the PostProcessSettings component
&'static PostProcessSettings,
);
// Runs the node logic
// This is where you encode draw commands.
//
// This will run on every view on which the graph is running.
// If you don't want your effect to run on every camera,
// you'll need to make sure you have a marker component as part of [`ViewQuery`]
// to identify which camera(s) should run the effect.
fn run(
&self,
_graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
(view_target, _post_process_settings): QueryItem<Self::ViewQuery>,
world: &World,
) -> Result<(), NodeRunError> {
// Get the pipeline resource that contains the global data we need
// to create the render pipeline
let post_process_pipeline = world.resource::<PostProcessPipeline>();
// The pipeline cache is a cache of all previously created pipelines.
// It is required to avoid creating a new pipeline each frame,
// which is expensive due to shader compilation.
let pipeline_cache = world.resource::<PipelineCache>();
// Get the pipeline from the cache
let Some(pipeline) = pipeline_cache.get_render_pipeline(post_process_pipeline.pipeline_id)
else {
return Ok(());
};
// Get the settings uniform binding
let settings_uniforms = world.resource::<ComponentUniforms<PostProcessSettings>>();
let Some(settings_binding) = settings_uniforms.uniforms().binding() else {
return Ok(());
};
// This will start a new "post process write", obtaining two texture
// views from the view target - a `source` and a `destination`.
// `source` is the "current" main texture and you _must_ write into
// `destination` because calling `post_process_write()` on the
// [`ViewTarget`] will internally flip the [`ViewTarget`]'s main
// texture to the `destination` texture. Failing to do so will cause
// the current main texture information to be lost.
let post_process = view_target.post_process_write();
// The bind_group gets created each frame.
//
// Normally, you would create a bind_group in the Queue set,
// but this doesn't work with the post_process_write().
// The reason it doesn't work is because each post_process_write will alternate the source/destination.
// The only way to have the correct source/destination for the bind_group
// is to make sure you get it during the node execution.
let bind_group = render_context.render_device().create_bind_group(
"post_process_bind_group",
&post_process_pipeline.layout,
// It's important for this to match the BindGroupLayout defined in the PostProcessPipeline
&BindGroupEntries::sequential((
// Make sure to use the source view
post_process.source,
// Use the sampler created for the pipeline
&post_process_pipeline.sampler,
// Set the settings binding
settings_binding.clone(),
)),
);
// Begin the render pass
let mut render_pass = render_context.begin_tracked_render_pass(RenderPassDescriptor {
label: Some("post_process_pass"),
color_attachments: &[Some(RenderPassColorAttachment {
// We need to specify the post process destination view here
// to make sure we write to the appropriate texture.
view: post_process.destination,
resolve_target: None,
ops: Operations::default(),
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
});
// This is mostly just wgpu boilerplate for drawing a fullscreen triangle,
// using the pipeline/bind_group created above
render_pass.set_render_pipeline(pipeline);
render_pass.set_bind_group(0, &bind_group, &[]);
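        // Draw 3 vertices in a single instance, with no vertex buffer: the
        // fullscreen vertex shader generates one oversized triangle that
        // covers the entire screen.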
render_pass.draw(0..3, 0..1);
Ok(())
}
}
// This contains global data used by the render pipeline. This will be created once on startup.
#[derive(Resource)]
struct PostProcessPipeline {
layout: BindGroupLayout,
sampler: Sampler,
pipeline_id: CachedRenderPipelineId,
}
impl FromWorld for PostProcessPipeline {
fn from_world(world: &mut World) -> Self {
let render_device = world.resource::<RenderDevice>();
// We need to define the bind group layout used for our pipeline
let layout = render_device.create_bind_group_layout(
"post_process_bind_group_layout",
&BindGroupLayoutEntries::sequential(
// The layout entries will only be visible in the fragment stage
ShaderStages::FRAGMENT,
(
// The screen texture
texture_2d(TextureSampleType::Float { filterable: true }),
// The sampler that will be used to sample the screen texture
sampler(SamplerBindingType::Filtering),
// The settings uniform that will control the effect
uniform_buffer::<PostProcessSettings>(false),
),
),
);
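        // `sequential` assigns binding indices in declaration order, so on the
        // shader side this layout corresponds to (a sketch; names assumed to
        // match the example shader):
        //   @group(0) @binding(0) var screen_texture: texture_2d<f32>;
        //   @group(0) @binding(1) var texture_sampler: sampler;
        //   @group(0) @binding(2) var<uniform> settings: PostProcessSettings;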
// We can create the sampler here since it won't change at runtime and doesn't depend on the view
let sampler = render_device.create_sampler(&SamplerDescriptor::default());
// Get the shader handle
let shader = world.load_asset("shaders/post_processing.wgsl");
let pipeline_id = world
.resource_mut::<PipelineCache>()
            // This will add the pipeline to the cache and queue its creation
.queue_render_pipeline(RenderPipelineDescriptor {
label: Some("post_process_pipeline".into()),
layout: vec![layout.clone()],
                // This will set up a fullscreen triangle for the vertex state
vertex: fullscreen_shader_vertex_state(),
fragment: Some(FragmentState {
shader,
shader_defs: vec![],
// Make sure this matches the entry point of your shader.
// It can be anything as long as it matches here and in the shader.
entry_point: "fragment".into(),
targets: vec![Some(ColorTargetState {
format: TextureFormat::bevy_default(),
blend: None,
write_mask: ColorWrites::ALL,
})],
}),
                // The following properties aren't important for this effect, so just use the default values.
                // This struct doesn't implement `Default` because not all fields can have a default value.
primitive: PrimitiveState::default(),
depth_stencil: None,
multisample: MultisampleState::default(),
push_constant_ranges: vec![],
});
Self {
layout,
sampler,
pipeline_id,
}
}
}
// This is the component that will get passed to the shader
#[derive(Component, Default, Clone, Copy, ExtractComponent, ShaderType)]
struct PostProcessSettings {
intensity: f32,
// WebGL2 structs must be 16 byte aligned.
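    // A single `f32` is 4 bytes, so a 12-byte `Vec3` pads the struct to exactly 16.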
#[cfg(feature = "webgl2")]
_webgl2_padding: Vec3,
}
/// Set up a simple 3D scene
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
) {
// camera
commands.spawn((
Camera3dBundle {
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 5.0))
.looking_at(Vec3::default(), Vec3::Y),
camera: Camera {
clear_color: Color::WHITE.into(),
..default()
},
..default()
},
// Add the setting to the camera.
// This component is also used to determine on which camera to run the post processing effect.
PostProcessSettings {
intensity: 0.02,
..default()
},
));
// cube
commands.spawn((
PbrBundle {
mesh: meshes.add(Cuboid::default()),
material: materials.add(Color::srgb(0.8, 0.7, 0.6)),
transform: Transform::from_xyz(0.0, 0.5, 0.0),
..default()
},
Rotates,
));
// light
commands.spawn(DirectionalLightBundle {
directional_light: DirectionalLight {
illuminance: 1_000.,
..default()
},
..default()
});
}
#[derive(Component)]
struct Rotates;
/// Rotates any entity with the [`Rotates`] component around the x and z axes
fn rotate(time: Res<Time>, mut query: Query<&mut Transform, With<Rotates>>) {
for mut transform in &mut query {
transform.rotate_x(0.55 * time.delta_seconds());
transform.rotate_z(0.15 * time.delta_seconds());
}
}
// Change the intensity over time to show that the effect is controlled from the main world
fn update_settings(mut settings: Query<&mut PostProcessSettings>, time: Res<Time>) {
for mut setting in &mut settings {
        let mut intensity = time.elapsed_seconds();
// Make it loop periodically
intensity = intensity.sin();
// Remap it to 0..1 because the intensity can't be negative
intensity = intensity * 0.5 + 0.5;
// Scale it to a more reasonable level
intensity *= 0.015;
// Set the intensity.
// This will then be extracted to the render world and uploaded to the gpu automatically by the [`UniformComponentPlugin`]
setting.intensity = intensity;
}
}