bevy/examples/3d/motion_blur.rs

//! Demonstrates how to enable per-object motion blur. This rendering feature can be configured per
//! camera using the [`MotionBlur`] component.

use bevy::{
    core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle},
    prelude::*,
};

fn main() {
    let mut app = App::new();

    // MSAA and Motion Blur together are not compatible on WebGL
    #[cfg(all(feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu")))]
    app.insert_resource(Msaa::Off);

    app.add_plugins(DefaultPlugins)
        .add_systems(Startup, (setup_camera, setup_scene, setup_ui))
        .add_systems(Update, (keyboard_inputs, move_cars, move_camera).chain())
        .run();
}

fn setup_camera(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Add the MotionBlurBundle to a camera to enable motion blur.
        // Motion blur requires the depth and motion vector prepass, which this bundle adds.
        // Configure the amount and quality of motion blur per-camera using this component.
        MotionBlurBundle {
            motion_blur: MotionBlur {
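                // `shutter_angle` is the fraction of a frame the virtual shutter stays open:
                // 1.0 is a full 360-degree shutter, while film cameras typically use about 0.5.
                // `samples` is the number of samples taken along each pixel's motion vector;
                // more samples give smoother blur at a higher per-pixel cost.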
                shutter_angle: 1.0,
                samples: 2,
                #[cfg(all(feature = "webgl2", target_arch = "wasm32", not(feature = "webgpu")))]
                _webgl2_padding: Default::default(),
            },
            ..default()
        },
    ));
}

// Everything past this point is used to build the example, but isn't required to use motion blur.

#[derive(Resource)]
enum CameraMode {
    Track,
    Chase,
}
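
// Per-car offset used to stagger the cars along the race track.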
#[derive(Component)]
struct Moves(f32);

#[derive(Component)]
struct CameraTracked;

#[derive(Component)]
struct Rotates;

fn setup_scene(
    asset_server: Res<AssetServer>,
    mut images: ResMut<Assets<Image>>,
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.insert_resource(AmbientLight {
        color: Color::WHITE,
        brightness: 300.0,
    });
    commands.insert_resource(CameraMode::Chase);
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            illuminance: 3_000.0,
            shadows_enabled: true,
            ..default()
        },
        transform: Transform::default().looking_to(Vec3::new(-1.0, -0.7, -1.0), Vec3::X),
        ..default()
    });
    // Sky
    commands.spawn(PbrBundle {
        mesh: meshes.add(Sphere::default()),
        material: materials.add(StandardMaterial {
            unlit: true,
            base_color: Color::linear_rgb(0.1, 0.6, 1.0),
            ..default()
        }),
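        // A negative scale turns the sphere inside out so its inner surface is visible,
        // letting it act as a simple sky dome around the scene.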
        transform: Transform::default().with_scale(Vec3::splat(-4000.0)),
        ..default()
    });
    // Ground
    let mut plane: Mesh = Plane3d::default().into();
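    // Stretch the UVs so the debug texture tiles many times across the large ground plane.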
    let uv_size = 4000.0;
    let uvs = vec![[uv_size, 0.0], [0.0, 0.0], [0.0, uv_size], [uv_size; 2]];
    plane.insert_attribute(Mesh::ATTRIBUTE_UV_0, uvs);
    commands.spawn(PbrBundle {
        mesh: meshes.add(plane),
        material: materials.add(StandardMaterial {
            base_color: Color::WHITE,
            perceptual_roughness: 1.0,
            base_color_texture: Some(images.add(uv_debug_texture())),
            ..default()
        }),
        transform: Transform::from_xyz(0.0, -0.65, 0.0).with_scale(Vec3::splat(80.)),
        ..default()
    });

    spawn_cars(&asset_server, &mut meshes, &mut materials, &mut commands);
    spawn_trees(&mut meshes, &mut materials, &mut commands);
    spawn_barriers(&mut meshes, &mut materials, &mut commands);
}

fn spawn_cars(
    asset_server: &AssetServer,
    meshes: &mut Assets<Mesh>,
    materials: &mut Assets<StandardMaterial>,
    commands: &mut Commands,
) {
    const N_CARS: usize = 20;
    let box_mesh = meshes.add(Cuboid::new(0.3, 0.15, 0.55));
    let cylinder = meshes.add(Cylinder::default());
    let logo = asset_server.load("branding/icon.png");
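    // Texture the wheels with the Bevy logo so their rotation is visible.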
    let wheel_matl = materials.add(StandardMaterial {
        base_color: Color::WHITE,
        base_color_texture: Some(logo.clone()),
        ..default()
    });
    let mut matl = |color| {
        materials.add(StandardMaterial {
            base_color: color,
            ..default()
        })
    };
    let colors = [
        matl(Color::linear_rgb(1.0, 0.0, 0.0)),
        matl(Color::linear_rgb(1.0, 1.0, 0.0)),
        matl(Color::BLACK),
        matl(Color::linear_rgb(0.0, 0.0, 1.0)),
        matl(Color::linear_rgb(0.0, 1.0, 0.0)),
        matl(Color::linear_rgb(1.0, 0.0, 1.0)),
        matl(Color::linear_rgb(0.5, 0.5, 0.0)),
        matl(Color::linear_rgb(1.0, 0.5, 0.0)),
    ];
    for i in 0..N_CARS {
        let color = colors[i % colors.len()].clone();
        let mut entity = commands
            .spawn((
                PbrBundle {
                    mesh: box_mesh.clone(),
                    material: color.clone(),
                    transform: Transform::from_scale(Vec3::splat(0.5)),
                    ..default()
                },
                Moves(i as f32 * 2.0),
            ))
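            // Only the first car gets the `CameraTracked` marker; the camera follows it.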
            .insert_if(CameraTracked, || i == 0);
        entity.with_children(|parent| {
            parent.spawn(PbrBundle {
                mesh: box_mesh.clone(),
                material: color,
                transform: Transform::from_xyz(0.0, 0.08, 0.03)
                    .with_scale(Vec3::new(1.0, 1.0, 0.5)),
                ..default()
            });
            let mut spawn_wheel = |x: f32, z: f32| {
                parent.spawn((
                    PbrBundle {
                        mesh: cylinder.clone(),
                        material: wheel_matl.clone(),
                        transform: Transform::from_xyz(0.14 * x, -0.045, 0.15 * z)
                            .with_scale(Vec3::new(0.15, 0.04, 0.15))
                            .with_rotation(Quat::from_rotation_z(std::f32::consts::FRAC_PI_2)),
                        ..default()
                    },
                    Rotates,
                ));
            };
            spawn_wheel(1.0, 1.0);
            spawn_wheel(1.0, -1.0);
            spawn_wheel(-1.0, 1.0);
            spawn_wheel(-1.0, -1.0);
        });
    }
}

fn spawn_barriers(
    meshes: &mut Assets<Mesh>,
    materials: &mut Assets<StandardMaterial>,
    commands: &mut Commands,
) {
    const N_CONES: usize = 100;
    let capsule = meshes.add(Capsule3d::default());
    let matl = materials.add(StandardMaterial {
        base_color: Color::srgb_u8(255, 87, 51),
        reflectance: 1.0,
        ..default()
    });
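    // Place a ring of cones along each side of the track, offset from its centerline.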
    let mut spawn_with_offset = |offset: f32| {
        for i in 0..N_CONES {
            let pos = race_track_pos(
                offset,
                (i as f32) / (N_CONES as f32) * std::f32::consts::PI * 2.0,
            );
            commands.spawn(PbrBundle {
                mesh: capsule.clone(),
                material: matl.clone(),
                transform: Transform::from_xyz(pos.x, -0.65, pos.y).with_scale(Vec3::splat(0.07)),
                ..default()
            });
        }
    };
    spawn_with_offset(0.04);
    spawn_with_offset(-0.04);
}

fn spawn_trees(
    meshes: &mut Assets<Mesh>,
    materials: &mut Assets<StandardMaterial>,
    commands: &mut Commands,
) {
    const N_TREES: usize = 30;
    let capsule = meshes.add(Capsule3d::default());
    let sphere = meshes.add(Sphere::default());
    let leaves = materials.add(Color::linear_rgb(0.0, 1.0, 0.0));
    let trunk = materials.add(Color::linear_rgb(0.4, 0.2, 0.2));
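    // Line both sides of the track with trees, placed further out than the cone barriers.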
    let mut spawn_with_offset = |offset: f32| {
        for i in 0..N_TREES {
            let pos = race_track_pos(
                offset,
                (i as f32) / (N_TREES as f32) * std::f32::consts::PI * 2.0,
            );
            let [x, z] = pos.into();
            commands.spawn(PbrBundle {
                mesh: sphere.clone(),
                material: leaves.clone(),
                transform: Transform::from_xyz(x, -0.3, z).with_scale(Vec3::splat(0.3)),
                ..default()
            });
            commands.spawn(PbrBundle {
                mesh: capsule.clone(),
                material: trunk.clone(),
                transform: Transform::from_xyz(x, -0.5, z).with_scale(Vec3::new(0.05, 0.3, 0.05)),
                ..default()
            });
        }
    };
    spawn_with_offset(0.07);
    spawn_with_offset(-0.07);
}

fn setup_ui(mut commands: Commands) {
    let style = TextStyle::default();
    commands.spawn(
        TextBundle::from_sections(vec![
            TextSection::new(String::new(), style.clone()),
            TextSection::new(String::new(), style.clone()),
            TextSection::new("1/2: -/+ shutter angle (blur amount)\n", style.clone()),
            TextSection::new("3/4: -/+ sample count (blur quality)\n", style.clone()),
            TextSection::new("Spacebar: cycle camera\n", style.clone()),
        ])
        .with_style(Style {
            position_type: PositionType::Absolute,
            top: Val::Px(12.0),
            left: Val::Px(12.0),
            ..default()
        }),
    );
}

fn keyboard_inputs(
    mut motion_blur: Query<&mut MotionBlur>,
    presses: Res<ButtonInput<KeyCode>>,
    mut text: Query<&mut Text>,
    mut camera: ResMut<CameraMode>,
) {
    let mut motion_blur = motion_blur.single_mut();
    if presses.just_pressed(KeyCode::Digit1) {
        motion_blur.shutter_angle -= 0.25;
    } else if presses.just_pressed(KeyCode::Digit2) {
        motion_blur.shutter_angle += 0.25;
    } else if presses.just_pressed(KeyCode::Digit3) {
        motion_blur.samples = motion_blur.samples.saturating_sub(1);
    } else if presses.just_pressed(KeyCode::Digit4) {
        motion_blur.samples += 1;
    } else if presses.just_pressed(KeyCode::Space) {
        *camera = match *camera {
            CameraMode::Track => CameraMode::Chase,
            CameraMode::Chase => CameraMode::Track,
        };
    }
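
    // Keep the adjusted parameters within sensible bounds.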
motion_blur.shutter_angle = motion_blur.shutter_angle.clamp(0.0, 1.0);
motion_blur.samples = motion_blur.samples.clamp(0, 64);
let mut text = text.single_mut();
text.sections[0].value = format!("Shutter angle: {:.2}\n", motion_blur.shutter_angle);
text.sections[1].value = format!("Samples: {}\n", motion_blur.samples);
}
/// Parametric function for a looping race track. The `offset` argument shifts the returned
/// point perpendicular to the track at parameter `t`.
fn race_track_pos(offset: f32, t: f32) -> Vec2 {
let x_tweak = 2.0;
let y_tweak = 3.0;
let scale = 8.0;
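// The track centerline is a Lissajous-style curve, (sin(2t), cos(3t)), scaled up by `scale`.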
let x0 = (x_tweak * t).sin();
let y0 = (y_tweak * t).cos();
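// Derivative (tangent) of the centerline. Normalizing (dy, -dx) gives a unit normal, so
// `offset` is a true perpendicular distance from the track.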
let dx = x_tweak * (x_tweak * t).cos();
let dy = y_tweak * -(y_tweak * t).sin();
let x = x0 + offset * dy / (dx.powi(2) + dy.powi(2)).sqrt();
let y = y0 - offset * dx / (dx.powi(2) + dy.powi(2)).sqrt();
Vec2::new(x, y) * scale
}
fn move_cars(
time: Res<Time>,
mut movables: Query<(&mut Transform, &Moves, &Children)>,
mut spins: Query<&mut Transform, (Without<Moves>, With<Rotates>)>,
) {
for (mut transform, moves, children) in &mut movables {
let time = time.elapsed_seconds() * 0.25;
let t = time + 0.5 * moves.0;
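// Nudge `t` by an amount that varies around the lap so the cars speed up and slow down
// instead of moving at a constant parametric rate.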
let dx = t.cos();
let dz = -(3.0 * t).sin();
let speed_variation = (dx * dx + dz * dz).sqrt() * 0.15;
let t = t + speed_variation;
let prev = transform.translation;
transform.translation.x = race_track_pos(0.0, t).x;
transform.translation.z = race_track_pos(0.0, t).y;
transform.translation.y = -0.59;
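// Orient the car along the direction it moved this frame.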
let delta = transform.translation - prev;
transform.look_to(delta, Vec3::Y);
for child in children.iter() {
let Ok(mut wheel) = spins.get_mut(*child) else {
continue;
};
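// Spin the wheels to match the distance traveled: revolutions = distance / circumference,
// converted to radians.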
let radius = wheel.scale.x;
let circumference = 2.0 * std::f32::consts::PI * radius;
let angle = delta.length() / circumference * std::f32::consts::PI * 2.0;
wheel.rotate_local_y(angle);
}
}
}
fn move_camera(
mut camera: Query<(&mut Transform, &mut Projection), Without<CameraTracked>>,
tracked: Query<&Transform, With<CameraTracked>>,
mode: Res<CameraMode>,
) {
let tracked = tracked.single();
let (mut transform, mut projection) = camera.single_mut();
match *mode {
CameraMode::Track => {
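// A distant, fixed camera with a very narrow field of view: the tracked car stays
// centered and sharp while the background streaks past.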
transform.look_at(tracked.translation, Vec3::Y);
transform.translation = Vec3::new(15.0, -0.5, 0.0);
if let Projection::Perspective(perspective) = &mut *projection {
perspective.fov = 0.05;
}
}
CameraMode::Chase => {
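// Chase camera: sit just above and behind the tracked car and look where it is heading.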
transform.translation =
tracked.translation + Vec3::new(0.0, 0.15, 0.0) + tracked.back() * 0.6;
transform.look_to(*tracked.forward(), Vec3::Y);
if let Projection::Perspective(perspective) = &mut *projection {
perspective.fov = 1.0;
}
}
}
}
fn uv_debug_texture() -> Image {
use bevy::render::{render_asset::RenderAssetUsages, render_resource::*, texture::*};
const TEXTURE_SIZE: usize = 7;
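// A small palette of gray RGBA pixels. Each texture row is this palette rotated by a few
// pixels, producing a repeating pattern that makes motion and blur easy to see.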
let mut palette = [
164, 164, 164, 255, 168, 168, 168, 255, 153, 153, 153, 255, 139, 139, 139, 255, 153, 153,
153, 255, 177, 177, 177, 255, 159, 159, 159, 255,
];
let mut texture_data = [0; TEXTURE_SIZE * TEXTURE_SIZE * 4];
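// Fill the texture one row at a time, rotating the palette by 3 pixels (12 bytes) between rows.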
for y in 0..TEXTURE_SIZE {
let offset = TEXTURE_SIZE * y * 4;
texture_data[offset..(offset + TEXTURE_SIZE * 4)].copy_from_slice(&palette);
palette.rotate_right(12);
}
let mut img = Image::new_fill(
Extent3d {
width: TEXTURE_SIZE as u32,
height: TEXTURE_SIZE as u32,
depth_or_array_layers: 1,
},
TextureDimension::D2,
&texture_data,
TextureFormat::Rgba8UnormSrgb,
RenderAssetUsages::RENDER_WORLD,
);
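// Tile the texture (repeat / mirror-repeat addressing) and use nearest-neighbor
// magnification so the pattern stays crisp up close.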
img.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
address_mode_u: ImageAddressMode::Repeat,
address_mode_v: ImageAddressMode::MirrorRepeat,
mag_filter: ImageFilterMode::Nearest,
..ImageSamplerDescriptor::linear()
});
img
}