Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)

This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.
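The march/refine/secant sequence can be illustrated with a small CPU-side sketch. This is hypothetical illustration code, not the actual shader: the function name `trace` is invented, and the real raymarch walks a 2D depth buffer in screen space rather than a 1D height function.

```rust
/// Illustrative sketch of the SSR hit search: march forward in coarse steps,
/// refine the crossing interval with binary search, then take one secant step.
/// `ray` and `surface` give heights as functions of the march parameter `t` in [0, 1].
fn trace(
    ray: impl Fn(f64) -> f64,
    surface: impl Fn(f64) -> f64,
    steps: u32,
) -> Option<f64> {
    let below = |t: f64| ray(t) <= surface(t);

    // 1. Coarse linear march: find the first step at which the ray dips
    //    below the depth surface.
    let dt = 1.0 / steps as f64;
    let mut t0 = 0.0;
    let mut t1 = None;
    for i in 1..=steps {
        let t = i as f64 * dt;
        if below(t) {
            t1 = Some(t);
            break;
        }
        t0 = t;
    }
    let mut t1 = t1?; // no crossing: the ray never hit anything

    // 2. Binary search: shrink the crossing interval [t0, t1].
    for _ in 0..16 {
        let mid = 0.5 * (t0 + t1);
        if below(mid) {
            t1 = mid;
        } else {
            t0 = mid;
        }
    }

    // 3. Secant step: linearly interpolate the zero of ray(t) - surface(t).
    let f0 = ray(t0) - surface(t0);
    let f1 = ray(t1) - surface(t1);
    if (f0 - f1).abs() > 1e-12 {
        Some(t0 + (t1 - t0) * f0 / (f0 - f1))
    } else {
        Some(t1)
    }
}

fn main() {
    // A descending ray (height 1 - t) hits a flat "depth surface" at height
    // 0.25 where 1 - t = 0.25, i.e. t = 0.75.
    let hit = trace(|t| 1.0 - t, |_| 0.25, 8).unwrap();
    println!("hit at t = {hit:.3}");
}
```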

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.
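A minimal setup sketch, assuming the new types are exported from `bevy::pbr` as described above (exact import paths and bundle fields may differ from this illustration):

```rust
use bevy::core_pipeline::prepass::{DeferredPrepass, DepthPrepass};
use bevy::pbr::ScreenSpaceReflectionsBundle;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands
        .spawn(Camera3dBundle::default())
        // Both prepasses must be present for the reflections to show up.
        .insert((DepthPrepass, DeferredPrepass))
        // Opt in to SSR; `ScreenSpaceReflectionsSettings` (included in the
        // bundle) ships with sensible defaults that artists can tweak.
        .insert(ScreenSpaceReflectionsBundle::default());
}
```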

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflectionsSettings` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
Commit f398674e51 (parent b0409f63d5) by Patrick Walton, committed by GitHub on 2024-05-27 06:43:40 -07:00.
No known key found for this signature in database (GPG key ID: B5690EEEBB952194).
23 changed files with 1954 additions and 107 deletions.

```diff
@@ -3038,6 +3038,17 @@ description = "Demonstrates visibility ranges"
 category = "3D Rendering"
 wasm = true

+[[example]]
+name = "ssr"
+path = "examples/3d/ssr.rs"
+doc-scrape-examples = true
+
+[package.metadata.example.ssr]
+name = "Screen Space Reflections"
+description = "Demonstrates screen space reflections with water ripples"
+category = "3D Rendering"
+wasm = false
+
 [[example]]
 name = "color_grading"
 path = "examples/3d/color_grading.rs"
```


@@ -0,0 +1,59 @@

```wgsl
// A shader that creates water ripples by overlaying 4 normal maps on top of one
// another.
//
// This is used in the `ssr` example. It only supports deferred rendering.

#import bevy_pbr::{
    pbr_deferred_functions::deferred_output,
    pbr_fragment::pbr_input_from_standard_material,
    prepass_io::{VertexOutput, FragmentOutput},
}
#import bevy_render::globals::Globals

// Parameters to the water shader.
struct WaterSettings {
    // How much to displace each octave each frame, in the u and v directions.
    // Two octaves are packed into each `vec4`.
    octave_vectors: array<vec4<f32>, 2>,
    // How wide the waves are in each octave.
    octave_scales: vec4<f32>,
    // How high the waves are in each octave.
    octave_strengths: vec4<f32>,
}

@group(0) @binding(1) var<uniform> globals: Globals;

@group(2) @binding(100) var water_normals_texture: texture_2d<f32>;
@group(2) @binding(101) var water_normals_sampler: sampler;
@group(2) @binding(102) var<uniform> water_settings: WaterSettings;

// Samples a single octave of noise and returns the resulting normal.
fn sample_noise_octave(uv: vec2<f32>, strength: f32) -> vec3<f32> {
    let N = textureSample(water_normals_texture, water_normals_sampler, uv).rbg * 2.0 - 1.0;
    // This isn't slerp, but it's good enough.
    return normalize(mix(vec3(0.0, 1.0, 0.0), N, strength));
}

// Samples all four octaves of noise and returns the resulting normal.
fn sample_noise(uv: vec2<f32>, time: f32) -> vec3<f32> {
    let uv0 = uv * water_settings.octave_scales[0] + water_settings.octave_vectors[0].xy * time;
    let uv1 = uv * water_settings.octave_scales[1] + water_settings.octave_vectors[0].zw * time;
    let uv2 = uv * water_settings.octave_scales[2] + water_settings.octave_vectors[1].xy * time;
    let uv3 = uv * water_settings.octave_scales[3] + water_settings.octave_vectors[1].zw * time;
    return normalize(
        sample_noise_octave(uv0, water_settings.octave_strengths[0]) +
        sample_noise_octave(uv1, water_settings.octave_strengths[1]) +
        sample_noise_octave(uv2, water_settings.octave_strengths[2]) +
        sample_noise_octave(uv3, water_settings.octave_strengths[3])
    );
}

@fragment
fn fragment(in: VertexOutput, @builtin(front_facing) is_front: bool) -> FragmentOutput {
    // Create the PBR input.
    var pbr_input = pbr_input_from_standard_material(in, is_front);
    // Bump the normal.
    pbr_input.N = sample_noise(in.uv, globals.time);
    // Send the rest to the deferred shader.
    return deferred_output(in, pbr_input);
}
```
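The `normalize(mix(...))` trick in `sample_noise_octave` is a cheap substitute for slerp: each octave's normal is linearly blended toward straight-up by its strength and then renormalized, so a strength of 0 yields the flat-water normal and a strength of 1 keeps the sampled normal. A CPU-side Rust sketch of that blend (illustrative only; `blend_octave` is an invented name, and the real code runs in WGSL):

```rust
fn normalize(v: [f64; 3]) -> [f64; 3] {
    let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

// Mirrors `sample_noise_octave`: mix(up, n, strength) = up + (n - up) * strength,
// then renormalize. Not slerp, but good enough for overlapping wave octaves.
fn blend_octave(n: [f64; 3], strength: f64) -> [f64; 3] {
    let up = [0.0, 1.0, 0.0];
    normalize([
        up[0] + (n[0] - up[0]) * strength,
        up[1] + (n[1] - up[1]) * strength,
        up[2] + (n[2] - up[2]) * strength,
    ])
}

fn main() {
    // strength = 0 collapses to the flat-water normal (0, 1, 0);
    // strength = 1 keeps the sampled normal unchanged.
    let flat = blend_octave([1.0, 0.0, 0.0], 0.0);
    let full = blend_octave([1.0, 0.0, 0.0], 1.0);
    println!("{flat:?} {full:?}");
}
```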

(New binary image asset added, 469 KiB; not shown.)


```diff
@@ -41,6 +41,26 @@ pub mod graph {
 // PERF: vulkan docs recommend using 24 bit depth for better performance
 pub const CORE_3D_DEPTH_FORMAT: TextureFormat = TextureFormat::Depth32Float;

+/// True if multisampled depth textures are supported on this platform.
+///
+/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
+/// because of a silly bug whereby Naga assumes that all depth textures are
+/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
+/// perform non-percentage-closer-filtering with such a sampler. Therefore we
+/// disable depth of field and screen space reflections entirely on WebGL 2.
+#[cfg(all(feature = "webgl", target_arch = "wasm32", not(feature = "webgpu")))]
+pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = false;
+
+/// True if multisampled depth textures are supported on this platform.
+///
+/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
+/// because of a silly bug whereby Naga assumes that all depth textures are
+/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
+/// perform non-percentage-closer-filtering with such a sampler. Therefore we
+/// disable depth of field and screen space reflections entirely on WebGL 2.
+#[cfg(any(feature = "webgpu", not(target_arch = "wasm32")))]
+pub const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = true;
+
 use std::ops::Range;

 use bevy_asset::AssetId;
```


```diff
@@ -56,7 +56,7 @@ use smallvec::SmallVec;
 use crate::{
     core_3d::{
         graph::{Core3d, Node3d},
-        Camera3d,
+        Camera3d, DEPTH_TEXTURE_SAMPLING_SUPPORTED,
     },
     fullscreen_vertex_shader::fullscreen_shader_vertex_state,
 };
@@ -883,23 +883,3 @@ impl DepthOfFieldPipelines {
         }
     }
 }
-
-/// Returns true if multisampled depth textures are supported on this platform.
-///
-/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
-/// because of a silly bug whereby Naga assumes that all depth textures are
-/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
-/// perform non-percentage-closer-filtering with such a sampler. Therefore we
-/// disable depth of field entirely on WebGL 2.
-#[cfg(all(feature = "webgl", target_arch = "wasm32", not(feature = "webgpu")))]
-const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = false;
-
-/// Returns true if multisampled depth textures are supported on this platform.
-///
-/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
-/// because of a silly bug whereby Naga assumes that all depth textures are
-/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
-/// perform non-percentage-closer-filtering with such a sampler. Therefore we
-/// disable depth of field entirely on WebGL 2.
-#[cfg(any(feature = "webgpu", not(target_arch = "wasm32")))]
-const DEPTH_TEXTURE_SAMPLING_SUPPORTED: bool = true;
```


```diff
@@ -11,8 +11,8 @@
 @group(0) @binding(3) var dt_lut_texture: texture_3d<f32>;
 @group(0) @binding(4) var dt_lut_sampler: sampler;
 #else
-@group(0) @binding(19) var dt_lut_texture: texture_3d<f32>;
-@group(0) @binding(20) var dt_lut_sampler: sampler;
+@group(0) @binding(20) var dt_lut_texture: texture_3d<f32>;
+@group(0) @binding(21) var dt_lut_sampler: sampler;
 #endif

 // Half the size of the crossfade region between shadows and midtones and
```

```diff
@@ -1,7 +1,8 @@
 use crate::{
     graph::NodePbr, irradiance_volume::IrradianceVolume, prelude::EnvironmentMapLight,
     MeshPipeline, MeshViewBindGroup, RenderViewLightProbes, ScreenSpaceAmbientOcclusionSettings,
-    ViewLightProbesUniformOffset,
+    ScreenSpaceReflectionsUniform, ViewLightProbesUniformOffset,
+    ViewScreenSpaceReflectionsUniformOffset,
 };
 use bevy_app::prelude::*;
 use bevy_asset::{load_internal_asset, Handle};
@@ -147,6 +148,7 @@ impl ViewNode for DeferredOpaquePass3dPbrLightingNode {
         &'static ViewLightsUniformOffset,
         &'static ViewFogUniformOffset,
         &'static ViewLightProbesUniformOffset,
+        &'static ViewScreenSpaceReflectionsUniformOffset,
         &'static MeshViewBindGroup,
         &'static ViewTarget,
         &'static DeferredLightingIdDepthTexture,
@@ -162,6 +164,7 @@ impl ViewNode for DeferredOpaquePass3dPbrLightingNode {
             view_lights_offset,
             view_fog_offset,
             view_light_probes_offset,
+            view_ssr_offset,
             mesh_view_bind_group,
             target,
             deferred_lighting_id_depth_texture,
@@ -216,6 +219,7 @@ impl ViewNode for DeferredOpaquePass3dPbrLightingNode {
                 view_lights_offset.offset,
                 view_fog_offset.offset,
                 **view_light_probes_offset,
+                **view_ssr_offset,
             ],
         );
         render_pass.set_bind_group(1, &bind_group_1, &[]);
@@ -260,7 +264,7 @@ impl SpecializedRenderPipeline for DeferredLightingLayout {
         } else if method == MeshPipelineKey::TONEMAP_METHOD_REINHARD_LUMINANCE {
             shader_defs.push("TONEMAP_METHOD_REINHARD_LUMINANCE".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_ACES_FITTED {
-            shader_defs.push("TONEMAP_METHOD_ACES_FITTED ".into());
+            shader_defs.push("TONEMAP_METHOD_ACES_FITTED".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_AGX {
             shader_defs.push("TONEMAP_METHOD_AGX".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_SOMEWHAT_BORING_DISPLAY_TRANSFORM {
@@ -301,6 +305,10 @@ impl SpecializedRenderPipeline for DeferredLightingLayout {
             shader_defs.push("MOTION_VECTOR_PREPASS".into());
         }

+        if key.contains(MeshPipelineKey::SCREEN_SPACE_REFLECTIONS) {
+            shader_defs.push("SCREEN_SPACE_REFLECTIONS".into());
+        }
+
         // Always true, since we're in the deferred lighting pipeline
         shader_defs.push("DEFERRED_PREPASS".into());
@@ -406,7 +414,10 @@ pub fn prepare_deferred_lighting_pipelines(
         Option<&Tonemapping>,
         Option<&DebandDither>,
         Option<&ShadowFilteringMethod>,
-        Has<ScreenSpaceAmbientOcclusionSettings>,
+        (
+            Has<ScreenSpaceAmbientOcclusionSettings>,
+            Has<ScreenSpaceReflectionsUniform>,
+        ),
         (
             Has<NormalPrepass>,
             Has<DepthPrepass>,
@@ -424,7 +435,7 @@ pub fn prepare_deferred_lighting_pipelines(
         tonemapping,
         dither,
         shadow_filter_method,
-        ssao,
+        (ssao, ssr),
         (normal_prepass, depth_prepass, motion_vector_prepass),
         has_environment_maps,
         has_irradiance_volumes,
@@ -473,6 +484,9 @@ pub fn prepare_deferred_lighting_pipelines(
         if ssao {
            view_key |= MeshPipelineKey::SCREEN_SPACE_AMBIENT_OCCLUSION;
         }
+        if ssr {
+            view_key |= MeshPipelineKey::SCREEN_SPACE_REFLECTIONS;
+        }

         // We don't need to check to see whether the environment map is loaded
         // because [`gather_light_probes`] already checked that for us before
```

```diff
@@ -34,6 +34,7 @@ mod pbr_material;
 mod prepass;
 mod render;
 mod ssao;
+mod ssr;
 mod volumetric_fog;

 use bevy_color::{Color, LinearRgba};
@@ -51,6 +52,7 @@
 pub use prepass::*;
 pub use render::*;
 pub use ssao::*;
+pub use ssr::*;
 pub use volumetric_fog::*;

 pub mod prelude {
@@ -87,6 +89,8 @@ pub mod graph {
         VolumetricFog,
         /// Label for the compute shader instance data building pass.
         GpuPreprocess,
+        /// Label for the screen space reflections pass.
+        ScreenSpaceReflections,
     }
 }
@@ -319,6 +323,7 @@ impl Plugin for PbrPlugin {
                 use_gpu_instance_buffer_builder: self.use_gpu_instance_buffer_builder,
             },
             VolumetricFogPlugin,
+            ScreenSpaceReflectionsPlugin,
         ))
         .configure_sets(
             PostUpdate,
```


```diff
@@ -8,7 +8,7 @@ use super::{
 };
 use crate::{
     MeshViewBindGroup, PrepassViewBindGroup, PreviousViewUniformOffset, ViewFogUniformOffset,
-    ViewLightProbesUniformOffset, ViewLightsUniformOffset,
+    ViewLightProbesUniformOffset, ViewLightsUniformOffset, ViewScreenSpaceReflectionsUniformOffset,
 };
 use bevy_core_pipeline::prepass::ViewPrepassTextures;
 use bevy_ecs::{query::QueryItem, world::World};
@@ -35,6 +35,7 @@ impl ViewNode for MeshletMainOpaquePass3dNode {
         &'static ViewLightsUniformOffset,
         &'static ViewFogUniformOffset,
         &'static ViewLightProbesUniformOffset,
+        &'static ViewScreenSpaceReflectionsUniformOffset,
         &'static MeshletViewMaterialsMainOpaquePass,
         &'static MeshletViewBindGroups,
         &'static MeshletViewResources,
@@ -52,6 +53,7 @@ impl ViewNode for MeshletMainOpaquePass3dNode {
             view_lights_offset,
             view_fog_offset,
             view_light_probes_offset,
+            view_ssr_offset,
             meshlet_view_materials,
             meshlet_view_bind_groups,
             meshlet_view_resources,
@@ -103,6 +105,7 @@ impl ViewNode for MeshletMainOpaquePass3dNode {
                 view_lights_offset.offset,
                 view_fog_offset.offset,
                 **view_light_probes_offset,
+                **view_ssr_offset,
             ],
         );
         render_pass.set_bind_group(1, meshlet_material_draw_bind_group, &[]);
```


```diff
@@ -1342,7 +1342,8 @@ bitflags::bitflags! {
         const LIGHTMAPPED = 1 << 13;
         const IRRADIANCE_VOLUME = 1 << 14;
         const VISIBILITY_RANGE_DITHER = 1 << 15;
-        const LAST_FLAG = Self::VISIBILITY_RANGE_DITHER.bits();
+        const SCREEN_SPACE_REFLECTIONS = 1 << 16;
+        const LAST_FLAG = Self::SCREEN_SPACE_REFLECTIONS.bits();

         // Bitfields
         const MSAA_RESERVED_BITS = Self::MSAA_MASK_BITS << Self::MSAA_SHIFT_BITS;
@@ -1676,7 +1677,7 @@ impl SpecializedMeshPipeline for MeshPipeline {
         } else if method == MeshPipelineKey::TONEMAP_METHOD_REINHARD_LUMINANCE {
             shader_defs.push("TONEMAP_METHOD_REINHARD_LUMINANCE".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_ACES_FITTED {
-            shader_defs.push("TONEMAP_METHOD_ACES_FITTED ".into());
+            shader_defs.push("TONEMAP_METHOD_ACES_FITTED".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_AGX {
             shader_defs.push("TONEMAP_METHOD_AGX".into());
         } else if method == MeshPipelineKey::TONEMAP_METHOD_SOMEWHAT_BORING_DISPLAY_TRANSFORM {
@@ -1923,6 +1924,7 @@ impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMeshViewBindGroup<I>
         Read<ViewLightsUniformOffset>,
         Read<ViewFogUniformOffset>,
         Read<ViewLightProbesUniformOffset>,
+        Read<ViewScreenSpaceReflectionsUniformOffset>,
         Read<MeshViewBindGroup>,
     );
     type ItemQuery = ();
@@ -1930,7 +1932,7 @@ impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMeshViewBindGroup<I>
     #[inline]
     fn render<'w>(
         _item: &P,
-        (view_uniform, view_lights, view_fog, view_light_probes, mesh_view_bind_group): ROQueryItem<
+        (view_uniform, view_lights, view_fog, view_light_probes, view_ssr, mesh_view_bind_group): ROQueryItem<
            'w,
            Self::ViewQuery,
        >,
@@ -1946,6 +1948,7 @@ impl<P: PhaseItem, const I: usize> RenderCommand<P> for SetMeshViewBindGroup<I>
                view_lights.offset,
                view_fog.offset,
                **view_light_probes,
+               **view_ssr,
            ],
        );
```


```diff
@@ -43,7 +43,8 @@ use crate::{
     },
     prepass, FogMeta, GlobalLightMeta, GpuFog, GpuLights, GpuPointLights, LightMeta,
     LightProbesBuffer, LightProbesUniform, MeshPipeline, MeshPipelineKey, RenderViewLightProbes,
-    ScreenSpaceAmbientOcclusionTextures, ShadowSamplers, ViewClusterBindings, ViewShadowBindings,
+    ScreenSpaceAmbientOcclusionTextures, ScreenSpaceReflectionsBuffer,
+    ScreenSpaceReflectionsUniform, ShadowSamplers, ViewClusterBindings, ViewShadowBindings,
     CLUSTERED_FORWARD_STORAGE_BUFFER_COUNT,
 };
@@ -280,9 +281,11 @@ fn layout_entries(
             )
             .visibility(ShaderStages::VERTEX),
         ),
+        // Screen space reflection settings
+        (13, uniform_buffer::<ScreenSpaceReflectionsUniform>(true)),
         // Screen space ambient occlusion texture
         (
-            13,
+            14,
             texture_2d(TextureSampleType::Float { filterable: false }),
         ),
     ));
@@ -291,9 +294,9 @@ fn layout_entries(
     // EnvironmentMapLight
     let environment_map_entries = environment_map::get_bind_group_layout_entries(render_device);
     entries = entries.extend_with_indices((
-        (14, environment_map_entries[0]),
-        (15, environment_map_entries[1]),
-        (16, environment_map_entries[2]),
+        (15, environment_map_entries[0]),
+        (16, environment_map_entries[1]),
+        (17, environment_map_entries[2]),
     ));

     // Irradiance volumes
@@ -301,16 +304,16 @@ fn layout_entries(
         let irradiance_volume_entries =
             irradiance_volume::get_bind_group_layout_entries(render_device);
         entries = entries.extend_with_indices((
-            (17, irradiance_volume_entries[0]),
-            (18, irradiance_volume_entries[1]),
+            (18, irradiance_volume_entries[0]),
+            (19, irradiance_volume_entries[1]),
         ));
     }

     // Tonemapping
     let tonemapping_lut_entries = get_lut_bind_group_layout_entries();
     entries = entries.extend_with_indices((
-        (19, tonemapping_lut_entries[0]),
-        (20, tonemapping_lut_entries[1]),
+        (20, tonemapping_lut_entries[0]),
+        (21, tonemapping_lut_entries[1]),
     ));

     // Prepass
@@ -320,7 +323,7 @@ fn layout_entries(
     {
         for (entry, binding) in prepass::get_bind_group_layout_entries(layout_key)
             .iter()
-            .zip([21, 22, 23, 24])
+            .zip([22, 23, 24, 25])
         {
             if let Some(entry) = entry {
                 entries = entries.extend_with_indices(((binding as u32, *entry),));
@@ -331,10 +334,10 @@ fn layout_entries(
     // View Transmission Texture
     entries = entries.extend_with_indices((
         (
-            25,
+            26,
             texture_2d(TextureSampleType::Float { filterable: true }),
         ),
-        (26, sampler(SamplerBindingType::Filtering)),
+        (27, sampler(SamplerBindingType::Filtering)),
     ));

     entries.to_vec()
@@ -468,6 +471,7 @@ pub fn prepare_mesh_view_bind_groups(
     tonemapping_luts: Res<TonemappingLuts>,
     light_probes_buffer: Res<LightProbesBuffer>,
     visibility_ranges: Res<RenderVisibilityRanges>,
+    ssr_buffer: Res<ScreenSpaceReflectionsBuffer>,
 ) {
     if let (
         Some(view_binding),
@@ -477,6 +481,7 @@ pub fn prepare_mesh_view_bind_groups(
         Some(fog_binding),
         Some(light_probes_binding),
         Some(visibility_ranges_buffer),
+        Some(ssr_binding),
     ) = (
         view_uniforms.uniforms.binding(),
         light_meta.view_gpu_lights.binding(),
@@ -485,6 +490,7 @@ pub fn prepare_mesh_view_bind_groups(
         fog_meta.gpu_fogs.binding(),
         light_probes_buffer.binding(),
         visibility_ranges.buffer().buffer(),
+        ssr_buffer.binding(),
     ) {
         for (
             entity,
@@ -525,7 +531,8 @@ pub fn prepare_mesh_view_bind_groups(
             (10, fog_binding.clone()),
             (11, light_probes_binding.clone()),
             (12, visibility_ranges_buffer.as_entire_binding()),
-            (13, ssao_view),
+            (13, ssr_binding.clone()),
+            (14, ssao_view),
         ));

         let environment_map_bind_group_entries = RenderViewEnvironmentMapBindGroupEntries::get(
@@ -542,9 +549,9 @@ pub fn prepare_mesh_view_bind_groups(
                 sampler,
             } => {
                 entries = entries.extend_with_indices((
-                    (14, diffuse_texture_view),
-                    (15, specular_texture_view),
-                    (16, sampler),
+                    (15, diffuse_texture_view),
+                    (16, specular_texture_view),
+                    (17, sampler),
                 ));
             }
             RenderViewEnvironmentMapBindGroupEntries::Multiple {
@@ -553,9 +560,9 @@ pub fn prepare_mesh_view_bind_groups(
                 sampler,
             } => {
                 entries = entries.extend_with_indices((
-                    (14, diffuse_texture_views.as_slice()),
-                    (15, specular_texture_views.as_slice()),
-                    (16, sampler),
+                    (15, diffuse_texture_views.as_slice()),
+                    (16, specular_texture_views.as_slice()),
+                    (17, sampler),
                 ));
             }
         }
@@ -576,21 +583,21 @@ pub fn prepare_mesh_view_bind_groups(
                 texture_view,
                 sampler,
             }) => {
-                entries = entries.extend_with_indices(((17, texture_view), (18, sampler)));
+                entries = entries.extend_with_indices(((18, texture_view), (19, sampler)));
             }
             Some(RenderViewIrradianceVolumeBindGroupEntries::Multiple {
                 ref texture_views,
                 sampler,
             }) => {
                 entries = entries
-                    .extend_with_indices(((17, texture_views.as_slice()), (18, sampler)));
+                    .extend_with_indices(((18, texture_views.as_slice()), (19, sampler)));
             }
             None => {}
         }

         let lut_bindings =
             get_lut_bindings(&images, &tonemapping_luts, tonemapping, &fallback_image);
-        entries = entries.extend_with_indices(((19, lut_bindings.0), (20, lut_bindings.1)));
+        entries = entries.extend_with_indices(((20, lut_bindings.0), (21, lut_bindings.1)));

         // When using WebGL, we can't have a depth texture with multisampling
         let prepass_bindings;
@@ -600,7 +607,7 @@ pub fn prepare_mesh_view_bind_groups(
             for (binding, index) in prepass_bindings
                 .iter()
                 .map(Option::as_ref)
-                .zip([21, 22, 23, 24])
+                .zip([22, 23, 24, 25])
                 .flat_map(|(b, i)| b.map(|b| (b, i)))
             {
                 entries = entries.extend_with_indices(((index, binding),));
@@ -616,7 +623,7 @@ pub fn prepare_mesh_view_bind_groups(
             .unwrap_or(&fallback_image_zero.sampler);

         entries =
-            entries.extend_with_indices(((25, transmission_view), (26, transmission_sampler)));
+            entries.extend_with_indices(((26, transmission_view), (27, transmission_sampler)));

         commands.entity(entity).insert(MeshViewBindGroup {
             value: render_device.create_bind_group("mesh_view_bind_group", layout, &entries),
```


```diff
@@ -42,58 +42,59 @@ const VISIBILITY_RANGE_UNIFORM_BUFFER_SIZE: u32 = 64u;
 @group(0) @binding(12) var<uniform> visibility_ranges: array<vec4<f32>, VISIBILITY_RANGE_UNIFORM_BUFFER_SIZE>;
 #endif

-@group(0) @binding(13) var screen_space_ambient_occlusion_texture: texture_2d<f32>;
+@group(0) @binding(13) var<uniform> ssr_settings: types::ScreenSpaceReflectionsSettings;
+@group(0) @binding(14) var screen_space_ambient_occlusion_texture: texture_2d<f32>;

 #ifdef MULTIPLE_LIGHT_PROBES_IN_ARRAY
-@group(0) @binding(14) var diffuse_environment_maps: binding_array<texture_cube<f32>, 8u>;
-@group(0) @binding(15) var specular_environment_maps: binding_array<texture_cube<f32>, 8u>;
+@group(0) @binding(15) var diffuse_environment_maps: binding_array<texture_cube<f32>, 8u>;
+@group(0) @binding(16) var specular_environment_maps: binding_array<texture_cube<f32>, 8u>;
 #else
-@group(0) @binding(14) var diffuse_environment_map: texture_cube<f32>;
-@group(0) @binding(15) var specular_environment_map: texture_cube<f32>;
+@group(0) @binding(15) var diffuse_environment_map: texture_cube<f32>;
+@group(0) @binding(16) var specular_environment_map: texture_cube<f32>;
 #endif
-@group(0) @binding(16) var environment_map_sampler: sampler;
+@group(0) @binding(17) var environment_map_sampler: sampler;

 #ifdef IRRADIANCE_VOLUMES_ARE_USABLE
 #ifdef MULTIPLE_LIGHT_PROBES_IN_ARRAY
-@group(0) @binding(17) var irradiance_volumes: binding_array<texture_3d<f32>, 8u>;
+@group(0) @binding(18) var irradiance_volumes: binding_array<texture_3d<f32>, 8u>;
 #else
-@group(0) @binding(17) var irradiance_volume: texture_3d<f32>;
+@group(0) @binding(18) var irradiance_volume: texture_3d<f32>;
 #endif
-@group(0) @binding(18) var irradiance_volume_sampler: sampler;
+@group(0) @binding(19) var irradiance_volume_sampler: sampler;
 #endif

 // NB: If you change these, make sure to update `tonemapping_shared.wgsl` too.
-@group(0) @binding(19) var dt_lut_texture: texture_3d<f32>;
-@group(0) @binding(20) var dt_lut_sampler: sampler;
+@group(0) @binding(20) var dt_lut_texture: texture_3d<f32>;
+@group(0) @binding(21) var dt_lut_sampler: sampler;

 #ifdef MULTISAMPLED
 #ifdef DEPTH_PREPASS
-@group(0) @binding(21) var depth_prepass_texture: texture_depth_multisampled_2d;
+@group(0) @binding(22) var depth_prepass_texture: texture_depth_multisampled_2d;
 #endif // DEPTH_PREPASS
 #ifdef NORMAL_PREPASS
-@group(0) @binding(22) var normal_prepass_texture: texture_multisampled_2d<f32>;
+@group(0) @binding(23) var normal_prepass_texture: texture_multisampled_2d<f32>;
 #endif // NORMAL_PREPASS
 #ifdef MOTION_VECTOR_PREPASS
-@group(0) @binding(23) var motion_vector_prepass_texture: texture_multisampled_2d<f32>;
+@group(0) @binding(24) var motion_vector_prepass_texture: texture_multisampled_2d<f32>;
 #endif // MOTION_VECTOR_PREPASS
 #else // MULTISAMPLED
 #ifdef DEPTH_PREPASS
-@group(0) @binding(21) var depth_prepass_texture: texture_depth_2d;
+@group(0) @binding(22) var depth_prepass_texture: texture_depth_2d;
 #endif // DEPTH_PREPASS
 #ifdef NORMAL_PREPASS
-@group(0) @binding(22) var normal_prepass_texture: texture_2d<f32>;
+@group(0) @binding(23) var normal_prepass_texture: texture_2d<f32>;
 #endif // NORMAL_PREPASS
 #ifdef MOTION_VECTOR_PREPASS
-@group(0) @binding(23) var motion_vector_prepass_texture: texture_2d<f32>;
+@group(0) @binding(24) var motion_vector_prepass_texture: texture_2d<f32>;
 #endif // MOTION_VECTOR_PREPASS
 #endif // MULTISAMPLED

 #ifdef DEFERRED_PREPASS
-@group(0) @binding(24) var deferred_prepass_texture: texture_2d<u32>;
+@group(0) @binding(25) var deferred_prepass_texture: texture_2d<u32>;
 #endif // DEFERRED_PREPASS

-@group(0) @binding(25) var view_transmission_texture: texture_2d<f32>;
-@group(0) @binding(26) var view_transmission_sampler: sampler;
+@group(0) @binding(26) var view_transmission_texture: texture_2d<f32>;
+@group(0) @binding(27) var view_transmission_sampler: sampler;
```


@@ -135,3 +135,16 @@ struct LightProbes {
// The intensity of the environment map associated with the view.
intensity_for_view: f32,
};
// Settings for screen space reflections.
//
// For more information on these settings, see the documentation for
// `bevy_pbr::ssr::ScreenSpaceReflectionsSettings`.
struct ScreenSpaceReflectionsSettings {
perceptual_roughness_threshold: f32,
thickness: f32,
linear_steps: u32,
linear_march_exponent: f32,
bisection_steps: u32,
use_secant: u32,
};


@@ -218,6 +218,23 @@ fn calculate_view(
return V;
}
// Diffuse strength is inversely related to metallicity, specular and diffuse transmission
fn calculate_diffuse_color(
base_color: vec3<f32>,
metallic: f32,
specular_transmission: f32,
diffuse_transmission: f32
) -> vec3<f32> {
return base_color * (1.0 - metallic) * (1.0 - specular_transmission) *
(1.0 - diffuse_transmission);
}
// Remapping [0,1] reflectance to F0
// See https://google.github.io/filament/Filament.html#materialsystem/parameterization/remapping
fn calculate_F0(base_color: vec3<f32>, metallic: f32, reflectance: f32) -> vec3<f32> {
return 0.16 * reflectance * reflectance * (1.0 - metallic) + base_color * metallic;
}
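As a sanity check on the two remappings above, here is a minimal Rust sketch (hypothetical, not part of the patch) that mirrors them per color channel: a dielectric with the conventional reflectance of 0.5 lands on the familiar F0 of 0.04, while a pure metal takes its base color as F0 and has no diffuse lobe.

```rust
// Hypothetical Rust mirror of `calculate_diffuse_color` and `calculate_F0`
// from the WGSL above, operating on a single channel for brevity.

// Diffuse strength is inversely related to metallicity and to both
// transmission terms.
fn calculate_diffuse_color(
    base_color: f32,
    metallic: f32,
    specular_transmission: f32,
    diffuse_transmission: f32,
) -> f32 {
    base_color * (1.0 - metallic) * (1.0 - specular_transmission) * (1.0 - diffuse_transmission)
}

// Remap [0, 1] reflectance to F0, following Filament's parameterization.
fn calculate_f0(base_color: f32, metallic: f32, reflectance: f32) -> f32 {
    0.16 * reflectance * reflectance * (1.0 - metallic) + base_color * metallic
}

fn main() {
    // A dielectric (metallic = 0) with reflectance 0.5: F0 = 0.16 * 0.25 = 0.04.
    assert!((calculate_f0(0.8, 0.0, 0.5) - 0.04).abs() < 1e-6);
    // A pure metal reflects with its base color and has no diffuse term.
    assert!((calculate_f0(0.8, 1.0, 0.5) - 0.8).abs() < 1e-6);
    assert_eq!(calculate_diffuse_color(0.8, 1.0, 0.0, 0.0), 0.0);
}
```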
#ifndef PREPASS_FRAGMENT #ifndef PREPASS_FRAGMENT
fn apply_pbr_lighting( fn apply_pbr_lighting(
in: pbr_types::PbrInput, in: pbr_types::PbrInput,
@@ -232,6 +249,7 @@ fn apply_pbr_lighting(
let roughness = lighting::perceptualRoughnessToRoughness(perceptual_roughness);
let ior = in.material.ior;
let thickness = in.material.thickness;
let reflectance = in.material.reflectance;
let diffuse_transmission = in.material.diffuse_transmission;
let specular_transmission = in.material.specular_transmission;
@@ -255,8 +273,12 @@ fn apply_pbr_lighting(
let clearcoat_R = reflect(-in.V, clearcoat_N);
#endif // STANDARD_MATERIAL_CLEARCOAT
let diffuse_color = calculate_diffuse_color(
    output_color.rgb,
    metallic,
    specular_transmission,
    diffuse_transmission
);
// Diffuse transmissive strength is inversely related to metallicity and specular transmission, but directly related to diffuse transmission
let diffuse_transmissive_color = output_color.rgb * (1.0 - metallic) * (1.0 - specular_transmission) * diffuse_transmission;
@@ -264,7 +286,7 @@ fn apply_pbr_lighting(
// Calculate the world position of the second Lambertian lobe used for diffuse transmission, by subtracting material thickness
let diffuse_transmissive_lobe_world_position = in.world_position - vec4<f32>(in.world_normal, 0.0) * thickness;
let F0 = calculate_F0(output_color.rgb, metallic, reflectance);
let F_ab = lighting::F_AB(perceptual_roughness, NdotV);
var direct_light: vec3<f32> = vec3<f32>(0.0);
@@ -439,8 +461,6 @@ fn apply_pbr_lighting(
#endif
}
#ifdef STANDARD_MATERIAL_DIFFUSE_TRANSMISSION
// NOTE: We use the diffuse transmissive color, the second Lambertian lobe's calculated
// world position, inverted normal and view vectors, and the following simplified
@@ -464,6 +484,8 @@ fn apply_pbr_lighting(
// any more diffuse indirect light. This avoids double-counting if, for
// example, both lightmaps and irradiance volumes are present.
var indirect_light = vec3(0.0f);
#ifdef LIGHTMAP
if (all(indirect_light == vec3(0.0f))) {
indirect_light += in.lightmap_light * diffuse_color;
@@ -480,20 +502,38 @@ fn apply_pbr_lighting(
#endif
// Environment map light (indirect)
//
// Note that up until this point, we have only accumulated diffuse light.
// This call is the first call that can accumulate specular light.
#ifdef ENVIRONMENT_MAP
// If screen space reflections are going to be used for this material, don't
// accumulate environment map light yet. The SSR shader will do it.
#ifdef SCREEN_SPACE_REFLECTIONS
let use_ssr = perceptual_roughness <=
view_bindings::ssr_settings.perceptual_roughness_threshold;
#else // SCREEN_SPACE_REFLECTIONS
let use_ssr = false;
#endif // SCREEN_SPACE_REFLECTIONS
if (!use_ssr) {
let environment_light = environment_map::environment_map_light(
&lighting_input,
any(indirect_light != vec3(0.0f))
);
indirect_light += environment_light.diffuse * diffuse_occlusion +
environment_light.specular * specular_occlusion;
}
#endif // ENVIRONMENT_MAP
// Ambient light (indirect)
indirect_light += ambient::ambient_light(in.world_position, in.N, in.V, NdotV, diffuse_color, F0, perceptual_roughness, diffuse_occlusion);
// we'll use the specular component of the transmitted environment
// light in the call to `specular_transmissive_light()` below
var specular_transmitted_environment_light = vec3<f32>(0.0);
#ifdef ENVIRONMENT_MAP
#ifdef STANDARD_MATERIAL_DIFFUSE_OR_SPECULAR_TRANSMISSION
// NOTE: We use the diffuse transmissive color, inverted normal and view vectors,
// and the following simplified values for the transmitted environment light contribution
@@ -539,19 +579,14 @@ fn apply_pbr_lighting(
#ifdef STANDARD_MATERIAL_DIFFUSE_TRANSMISSION
transmitted_light += transmitted_environment_light.diffuse * diffuse_transmissive_color;
#endif // STANDARD_MATERIAL_DIFFUSE_TRANSMISSION
#ifdef STANDARD_MATERIAL_SPECULAR_TRANSMISSION
specular_transmitted_environment_light = transmitted_environment_light.specular * specular_transmissive_color;
#endif // STANDARD_MATERIAL_SPECULAR_TRANSMISSION
#endif // STANDARD_MATERIAL_SPECULAR_OR_DIFFUSE_TRANSMISSION
#endif // ENVIRONMENT_MAP
var emissive_light = emissive.rgb * output_color.a;


@@ -305,12 +305,6 @@ fn Fd_Burley(
return lightScatter * viewScatter * (1.0 / PI);
}
// Remapping [0,1] reflectance to F0
// See https://google.github.io/filament/Filament.html#materialsystem/parameterization/remapping
fn F0(reflectance: f32, metallic: f32, color: vec3<f32>) -> vec3<f32> {
return 0.16 * reflectance * reflectance * (1.0 - metallic) + color * metallic;
}
// Scale/bias approximation
// https://www.unrealengine.com/en-US/blog/physically-based-shading-on-mobile
// TODO: Use a LUT (more accurate)


@@ -95,7 +95,7 @@ struct PbrInput {
material: StandardMaterial,
// Note: this gets monochromized upon deferred PbrInput reconstruction.
diffuse_occlusion: vec3<f32>,
// Note: this is 1.0 (entirely unoccluded) when SSAO and SSR are off.
specular_occlusion: f32,
frag_coord: vec4<f32>,
world_position: vec4<f32>,


@@ -196,3 +196,9 @@ fn frag_coord_to_uv(frag_coord: vec2<f32>) -> vec2<f32> {
fn frag_coord_to_ndc(frag_coord: vec4<f32>) -> vec3<f32> {
return vec3(uv_to_ndc(frag_coord_to_uv(frag_coord.xy)), frag_coord.z);
}
/// Convert ndc space xy coordinate [-1.0 .. 1.0] to [0 .. render target
/// viewport size]
fn ndc_to_frag_coord(ndc: vec2<f32>) -> vec2<f32> {
return ndc_to_uv(ndc) * view_bindings::view.viewport.zw;
}
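The conversion above composes `ndc_to_uv` with a scale by the viewport size. A small Rust sketch of the same arithmetic (hypothetical, assuming Bevy's convention that `ndc_to_uv` flips Y so the UV origin is the top-left corner):

```rust
// Hypothetical sketch of `ndc_to_frag_coord`: NDC xy in [-1, 1] maps to
// UV in [0, 1] with Y flipped, then scales by the viewport size.
fn ndc_to_uv(ndc: [f32; 2]) -> [f32; 2] {
    [ndc[0] * 0.5 + 0.5, ndc[1] * -0.5 + 0.5]
}

fn ndc_to_frag_coord(ndc: [f32; 2], viewport_size: [f32; 2]) -> [f32; 2] {
    let uv = ndc_to_uv(ndc);
    [uv[0] * viewport_size[0], uv[1] * viewport_size[1]]
}

fn main() {
    // NDC (-1, 1) is the top-left corner of the screen: fragment (0, 0).
    assert_eq!(ndc_to_frag_coord([-1.0, 1.0], [1920.0, 1080.0]), [0.0, 0.0]);
    // NDC (0, 0) is the center of the viewport.
    assert_eq!(ndc_to_frag_coord([0.0, 0.0], [1920.0, 1080.0]), [960.0, 540.0]);
}
```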


@@ -0,0 +1,563 @@
//! Screen space reflections implemented via raymarching.
use bevy_app::{App, Plugin};
use bevy_asset::{load_internal_asset, Handle};
use bevy_core_pipeline::{
core_3d::{
graph::{Core3d, Node3d},
DEPTH_TEXTURE_SAMPLING_SUPPORTED,
},
fullscreen_vertex_shader,
prepass::{DeferredPrepass, DepthPrepass, MotionVectorPrepass, NormalPrepass},
};
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::{
bundle::Bundle,
component::Component,
entity::Entity,
query::{Has, QueryItem, With},
reflect::ReflectComponent,
schedule::IntoSystemConfigs as _,
system::{lifetimeless::Read, Commands, Query, Res, ResMut, Resource},
world::{FromWorld, World},
};
use bevy_reflect::{std_traits::ReflectDefault, Reflect};
use bevy_render::{
extract_component::{ExtractComponent, ExtractComponentPlugin},
render_graph::{NodeRunError, RenderGraphApp, RenderGraphContext, ViewNode, ViewNodeRunner},
render_resource::{
binding_types, AddressMode, BindGroupEntries, BindGroupLayout, BindGroupLayoutEntries,
CachedRenderPipelineId, ColorTargetState, ColorWrites, DynamicUniformBuffer, FilterMode,
FragmentState, Operations, PipelineCache, RenderPassColorAttachment, RenderPassDescriptor,
RenderPipelineDescriptor, Sampler, SamplerBindingType, SamplerDescriptor, Shader,
ShaderStages, ShaderType, SpecializedRenderPipeline, SpecializedRenderPipelines,
TextureFormat, TextureSampleType,
},
renderer::{RenderContext, RenderDevice, RenderQueue},
texture::BevyDefault as _,
view::{ExtractedView, Msaa, ViewTarget, ViewUniformOffset},
Render, RenderApp, RenderSet,
};
use bevy_utils::{info_once, prelude::default};
use crate::{
binding_arrays_are_usable, graph::NodePbr, prelude::EnvironmentMapLight,
MeshPipelineViewLayoutKey, MeshPipelineViewLayouts, MeshViewBindGroup, RenderViewLightProbes,
ViewFogUniformOffset, ViewLightProbesUniformOffset, ViewLightsUniformOffset,
};
const SSR_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(10438925299917978850);
const RAYMARCH_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(8517409683450840946);
/// Enables screen-space reflections for a camera.
///
/// Screen-space reflections are currently only supported with deferred rendering.
pub struct ScreenSpaceReflectionsPlugin;
/// A convenient bundle to add screen space reflections to a camera, along with
/// the depth and deferred prepasses required to enable them.
#[derive(Bundle, Default)]
pub struct ScreenSpaceReflectionsBundle {
/// The component that enables SSR.
pub settings: ScreenSpaceReflectionsSettings,
/// The depth prepass, needed for SSR.
pub depth_prepass: DepthPrepass,
/// The deferred prepass, needed for SSR.
pub deferred_prepass: DeferredPrepass,
}
/// Add this component to a camera to enable *screen-space reflections* (SSR).
///
/// Screen-space reflections currently require deferred rendering in order to
/// appear. Therefore, you'll generally need to add a [`DepthPrepass`] and a
/// [`DeferredPrepass`] to the camera as well.
///
/// SSR currently performs no roughness filtering for glossy reflections, so
/// only very smooth surfaces will reflect objects in screen space. You can
/// adjust the `perceptual_roughness_threshold` in order to tune the threshold
/// below which screen-space reflections will be traced.
///
/// As with all screen-space techniques, SSR can only reflect objects on screen.
/// When objects leave the camera, they will disappear from reflections.
/// Alternatives that don't suffer from this problem include
/// [`crate::environment_map::ReflectionProbeBundle`]s. The advantage of SSR is
/// that it can reflect all objects, not just static ones.
///
/// SSR is an approximation technique and produces artifacts in some situations.
/// Hand-tuning the settings in this component will likely be useful.
///
/// Screen-space reflections are presently unsupported on WebGL 2 because of a
/// bug whereby Naga doesn't generate correct GLSL when sampling depth buffers,
/// which is required for screen-space raymarching.
#[derive(Clone, Copy, Component, Reflect)]
#[reflect(Component, Default)]
pub struct ScreenSpaceReflectionsSettings {
/// The maximum PBR roughness level that will enable screen space
/// reflections.
pub perceptual_roughness_threshold: f32,
/// When marching the depth buffer, we only have 2.5D information and don't
/// know how thick surfaces are. We shall assume that the depth buffer
/// fragments are cuboids with a constant thickness defined by this
/// parameter.
pub thickness: f32,
/// The number of steps to be taken at regular intervals to find an initial
/// intersection. Must not be zero.
///
/// Higher values result in higher-quality reflections, because the
/// raymarching shader is less likely to miss objects. However, they take
/// more GPU time.
pub linear_steps: u32,
/// Exponent to be applied in the linear part of the march.
///
/// A value of 1.0 will result in equidistant steps, and higher values will
/// compress the earlier steps, and expand the later ones. This might be
/// desirable in order to get more detail close to objects.
///
/// For optimal performance, this should be a small unsigned integer, such
/// as 1 or 2.
pub linear_march_exponent: f32,
/// Number of steps in a bisection (binary search) to perform once the
/// linear search has found an intersection. Helps narrow down the hit,
/// increasing the chance of the secant method finding an accurate hit
/// point.
pub bisection_steps: u32,
/// Approximate the root position using the secant method—by solving for
/// line-line intersection between the ray approach rate and the surface
/// gradient.
pub use_secant: bool,
}
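The effect of `linear_march_exponent` on sample placement can be sketched with a few lines of Rust (hypothetical, mirroring the `mix`/`pow` expression in the raymarching shader): an exponent of 1.0 spaces the samples evenly along the ray, while larger exponents cluster them near the ray origin.

```rust
// Hypothetical sketch of where the linear march samples the ray:
// t = ((step + jitter) / linear_steps) ^ linear_march_exponent.
fn sample_ts(linear_steps: u32, exponent: f32, jitter: f32) -> Vec<f32> {
    (0..linear_steps)
        .map(|step| ((step as f32 + jitter) / linear_steps as f32).powf(exponent))
        .collect()
}

fn main() {
    // Exponent 1.0: equidistant steps along the ray.
    let even = sample_ts(4, 1.0, 1.0);
    assert!(even
        .iter()
        .zip([0.25, 0.5, 0.75, 1.0])
        .all(|(a, b)| (a - b).abs() < 1e-6));

    // Exponent 2.0: the early steps are compressed toward the origin,
    // which gives more detail close to the reflecting surface.
    let skewed = sample_ts(4, 2.0, 1.0);
    assert!(skewed[0] < even[0]);
    assert!((skewed.last().unwrap() - 1.0).abs() < 1e-6);
}
```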
/// A version of [`ScreenSpaceReflectionsSettings`] for upload to the GPU.
///
/// For more information on these fields, see the corresponding documentation in
/// [`ScreenSpaceReflectionsSettings`].
#[derive(Clone, Copy, Component, ShaderType)]
pub struct ScreenSpaceReflectionsUniform {
perceptual_roughness_threshold: f32,
thickness: f32,
linear_steps: u32,
linear_march_exponent: f32,
bisection_steps: u32,
/// A boolean converted to a `u32`.
use_secant: u32,
}
/// The node in the render graph that traces screen space reflections.
#[derive(Default)]
pub struct ScreenSpaceReflectionsNode;
/// Identifies which screen space reflections render pipeline a view needs.
#[derive(Component, Deref, DerefMut)]
pub struct ScreenSpaceReflectionsPipelineId(pub CachedRenderPipelineId);
/// Information relating to the render pipeline for the screen space reflections
/// shader.
#[derive(Resource)]
pub struct ScreenSpaceReflectionsPipeline {
mesh_view_layouts: MeshPipelineViewLayouts,
color_sampler: Sampler,
depth_linear_sampler: Sampler,
depth_nearest_sampler: Sampler,
bind_group_layout: BindGroupLayout,
binding_arrays_are_usable: bool,
}
/// A GPU buffer that stores the screen space reflection settings for each view.
#[derive(Resource, Default, Deref, DerefMut)]
pub struct ScreenSpaceReflectionsBuffer(pub DynamicUniformBuffer<ScreenSpaceReflectionsUniform>);
/// A component that stores the offset within the
/// [`ScreenSpaceReflectionsBuffer`] for each view.
#[derive(Component, Default, Deref, DerefMut)]
pub struct ViewScreenSpaceReflectionsUniformOffset(u32);
/// Identifies a specific configuration of the SSR pipeline shader.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct ScreenSpaceReflectionsPipelineKey {
mesh_pipeline_view_key: MeshPipelineViewLayoutKey,
is_hdr: bool,
has_environment_maps: bool,
}
impl Plugin for ScreenSpaceReflectionsPlugin {
fn build(&self, app: &mut App) {
load_internal_asset!(app, SSR_SHADER_HANDLE, "ssr.wgsl", Shader::from_wgsl);
load_internal_asset!(
app,
RAYMARCH_SHADER_HANDLE,
"raymarch.wgsl",
Shader::from_wgsl
);
app.register_type::<ScreenSpaceReflectionsSettings>()
.add_plugins(ExtractComponentPlugin::<ScreenSpaceReflectionsSettings>::default());
let Some(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app
.init_resource::<ScreenSpaceReflectionsBuffer>()
.add_systems(Render, prepare_ssr_pipelines.in_set(RenderSet::Prepare))
.add_systems(
Render,
prepare_ssr_settings.in_set(RenderSet::PrepareResources),
)
.add_render_graph_node::<ViewNodeRunner<ScreenSpaceReflectionsNode>>(
Core3d,
NodePbr::ScreenSpaceReflections,
);
}
fn finish(&self, app: &mut App) {
let Some(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app
.init_resource::<ScreenSpaceReflectionsPipeline>()
.init_resource::<SpecializedRenderPipelines<ScreenSpaceReflectionsPipeline>>()
.add_render_graph_edges(
Core3d,
(
NodePbr::DeferredLightingPass,
NodePbr::ScreenSpaceReflections,
Node3d::MainOpaquePass,
),
);
}
}
impl Default for ScreenSpaceReflectionsSettings {
// Reasonable default values.
//
// These are from
// <https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db?permalink_comment_id=4552149#gistcomment-4552149>.
fn default() -> Self {
Self {
perceptual_roughness_threshold: 0.1,
linear_steps: 16,
bisection_steps: 4,
use_secant: true,
thickness: 0.25,
linear_march_exponent: 1.0,
}
}
}
impl ViewNode for ScreenSpaceReflectionsNode {
type ViewQuery = (
Read<ViewTarget>,
Read<ViewUniformOffset>,
Read<ViewLightsUniformOffset>,
Read<ViewFogUniformOffset>,
Read<ViewLightProbesUniformOffset>,
Read<ViewScreenSpaceReflectionsUniformOffset>,
Read<MeshViewBindGroup>,
Read<ScreenSpaceReflectionsPipelineId>,
);
fn run<'w>(
&self,
_: &mut RenderGraphContext,
render_context: &mut RenderContext<'w>,
(
view_target,
view_uniform_offset,
view_lights_offset,
view_fog_offset,
view_light_probes_offset,
view_ssr_offset,
view_bind_group,
ssr_pipeline_id,
): QueryItem<'w, Self::ViewQuery>,
world: &'w World,
) -> Result<(), NodeRunError> {
// Grab the render pipeline.
let pipeline_cache = world.resource::<PipelineCache>();
let Some(render_pipeline) = pipeline_cache.get_render_pipeline(**ssr_pipeline_id) else {
return Ok(());
};
// Set up a standard pair of postprocessing textures.
let postprocess = view_target.post_process_write();
// Create the bind group for this view.
let ssr_pipeline = world.resource::<ScreenSpaceReflectionsPipeline>();
let ssr_bind_group = render_context.render_device().create_bind_group(
"SSR bind group",
&ssr_pipeline.bind_group_layout,
&BindGroupEntries::sequential((
postprocess.source,
&ssr_pipeline.color_sampler,
&ssr_pipeline.depth_linear_sampler,
&ssr_pipeline.depth_nearest_sampler,
)),
);
// Build the SSR render pass.
let mut render_pass = render_context.begin_tracked_render_pass(RenderPassDescriptor {
label: Some("SSR pass"),
color_attachments: &[Some(RenderPassColorAttachment {
view: postprocess.destination,
resolve_target: None,
ops: Operations::default(),
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
});
// Set bind groups.
render_pass.set_render_pipeline(render_pipeline);
render_pass.set_bind_group(
0,
&view_bind_group.value,
&[
view_uniform_offset.offset,
view_lights_offset.offset,
view_fog_offset.offset,
**view_light_probes_offset,
**view_ssr_offset,
],
);
// Perform the SSR render pass.
render_pass.set_bind_group(1, &ssr_bind_group, &[]);
render_pass.draw(0..3, 0..1);
Ok(())
}
}
impl FromWorld for ScreenSpaceReflectionsPipeline {
fn from_world(world: &mut World) -> Self {
let mesh_view_layouts = world.resource::<MeshPipelineViewLayouts>().clone();
let render_device = world.resource::<RenderDevice>();
// Create the bind group layout.
let bind_group_layout = render_device.create_bind_group_layout(
"SSR bind group layout",
&BindGroupLayoutEntries::sequential(
ShaderStages::FRAGMENT,
(
binding_types::texture_2d(TextureSampleType::Float { filterable: true }),
binding_types::sampler(SamplerBindingType::Filtering),
binding_types::sampler(SamplerBindingType::Filtering),
binding_types::sampler(SamplerBindingType::NonFiltering),
),
),
);
// Create the samplers we need.
let color_sampler = render_device.create_sampler(&SamplerDescriptor {
label: "SSR color sampler".into(),
address_mode_u: AddressMode::ClampToEdge,
address_mode_v: AddressMode::ClampToEdge,
mag_filter: FilterMode::Linear,
min_filter: FilterMode::Linear,
..default()
});
let depth_linear_sampler = render_device.create_sampler(&SamplerDescriptor {
label: "SSR depth linear sampler".into(),
address_mode_u: AddressMode::ClampToEdge,
address_mode_v: AddressMode::ClampToEdge,
mag_filter: FilterMode::Linear,
min_filter: FilterMode::Linear,
..default()
});
let depth_nearest_sampler = render_device.create_sampler(&SamplerDescriptor {
label: "SSR depth nearest sampler".into(),
address_mode_u: AddressMode::ClampToEdge,
address_mode_v: AddressMode::ClampToEdge,
mag_filter: FilterMode::Nearest,
min_filter: FilterMode::Nearest,
..default()
});
Self {
mesh_view_layouts,
color_sampler,
depth_linear_sampler,
depth_nearest_sampler,
bind_group_layout,
binding_arrays_are_usable: binding_arrays_are_usable(render_device),
}
}
}
/// Sets up screen space reflection pipelines for each applicable view.
pub fn prepare_ssr_pipelines(
mut commands: Commands,
pipeline_cache: Res<PipelineCache>,
mut pipelines: ResMut<SpecializedRenderPipelines<ScreenSpaceReflectionsPipeline>>,
ssr_pipeline: Res<ScreenSpaceReflectionsPipeline>,
views: Query<
(
Entity,
&ExtractedView,
Has<RenderViewLightProbes<EnvironmentMapLight>>,
Has<NormalPrepass>,
Has<MotionVectorPrepass>,
),
(
With<ScreenSpaceReflectionsUniform>,
With<DepthPrepass>,
With<DeferredPrepass>,
),
>,
) {
for (
entity,
extracted_view,
has_environment_maps,
has_normal_prepass,
has_motion_vector_prepass,
) in &views
{
// SSR is only supported in the deferred pipeline, which has no MSAA
// support. Thus we can assume MSAA is off.
let mut mesh_pipeline_view_key = MeshPipelineViewLayoutKey::from(Msaa::Off)
| MeshPipelineViewLayoutKey::DEPTH_PREPASS
| MeshPipelineViewLayoutKey::DEFERRED_PREPASS;
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::NORMAL_PREPASS,
has_normal_prepass,
);
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::MOTION_VECTOR_PREPASS,
has_motion_vector_prepass,
);
// Build the pipeline.
let pipeline_id = pipelines.specialize(
&pipeline_cache,
&ssr_pipeline,
ScreenSpaceReflectionsPipelineKey {
mesh_pipeline_view_key,
is_hdr: extracted_view.hdr,
has_environment_maps,
},
);
// Note which pipeline ID was used.
commands
.entity(entity)
.insert(ScreenSpaceReflectionsPipelineId(pipeline_id));
}
}
/// Gathers up screen space reflection settings for each applicable view and
/// writes them into a GPU buffer.
pub fn prepare_ssr_settings(
mut commands: Commands,
views: Query<(Entity, Option<&ScreenSpaceReflectionsUniform>), With<ExtractedView>>,
mut ssr_settings_buffer: ResMut<ScreenSpaceReflectionsBuffer>,
render_device: Res<RenderDevice>,
render_queue: Res<RenderQueue>,
) {
let Some(mut writer) =
ssr_settings_buffer.get_writer(views.iter().len(), &render_device, &render_queue)
else {
return;
};
for (view, ssr_uniform) in views.iter() {
let uniform_offset = match ssr_uniform {
None => 0,
Some(ssr_uniform) => writer.write(ssr_uniform),
};
commands
.entity(view)
.insert(ViewScreenSpaceReflectionsUniformOffset(uniform_offset));
}
}
impl ExtractComponent for ScreenSpaceReflectionsSettings {
type QueryData = Read<ScreenSpaceReflectionsSettings>;
type QueryFilter = ();
type Out = ScreenSpaceReflectionsUniform;
fn extract_component(settings: QueryItem<'_, Self::QueryData>) -> Option<Self::Out> {
if !DEPTH_TEXTURE_SAMPLING_SUPPORTED {
info_once!(
"Disabling screen-space reflections on this platform because depth textures \
aren't supported correctly"
);
return None;
}
Some((*settings).into())
}
}
impl SpecializedRenderPipeline for ScreenSpaceReflectionsPipeline {
type Key = ScreenSpaceReflectionsPipelineKey;
fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
let mesh_view_layout = self
.mesh_view_layouts
.get_view_layout(key.mesh_pipeline_view_key);
let mut shader_defs = vec![
"DEPTH_PREPASS".into(),
"DEFERRED_PREPASS".into(),
"SCREEN_SPACE_REFLECTIONS".into(),
];
if key.has_environment_maps {
shader_defs.push("ENVIRONMENT_MAP".into());
}
if self.binding_arrays_are_usable {
shader_defs.push("MULTIPLE_LIGHT_PROBES_IN_ARRAY".into());
}
RenderPipelineDescriptor {
label: Some("SSR pipeline".into()),
layout: vec![mesh_view_layout.clone(), self.bind_group_layout.clone()],
vertex: fullscreen_vertex_shader::fullscreen_shader_vertex_state(),
fragment: Some(FragmentState {
shader: SSR_SHADER_HANDLE,
shader_defs,
entry_point: "fragment".into(),
targets: vec![Some(ColorTargetState {
format: if key.is_hdr {
ViewTarget::TEXTURE_FORMAT_HDR
} else {
TextureFormat::bevy_default()
},
blend: None,
write_mask: ColorWrites::ALL,
})],
}),
push_constant_ranges: vec![],
primitive: default(),
depth_stencil: None,
multisample: default(),
}
}
}
impl From<ScreenSpaceReflectionsSettings> for ScreenSpaceReflectionsUniform {
fn from(settings: ScreenSpaceReflectionsSettings) -> Self {
Self {
perceptual_roughness_threshold: settings.perceptual_roughness_threshold,
thickness: settings.thickness,
linear_steps: settings.linear_steps,
linear_march_exponent: settings.linear_march_exponent,
bisection_steps: settings.bisection_steps,
use_secant: settings.use_secant as u32,
}
}
}

View file

@@ -0,0 +1,511 @@
// Copyright (c) 2023 Tomasz Stachowiak
//
// This contribution is dual licensed under EITHER OF
//
// Apache License, Version 2.0, (http://www.apache.org/licenses/LICENSE-2.0)
// MIT license (http://opensource.org/licenses/MIT)
//
// at your option.
//
// This is a port of the original [`raymarch.hlsl`] to WGSL. It's deliberately
// kept as close as possible so that patches to the original `raymarch.hlsl`
// have the greatest chances of applying to this version.
//
// [`raymarch.hlsl`]:
// https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db
#define_import_path bevy_pbr::raymarch
#import bevy_pbr::mesh_view_bindings::depth_prepass_texture
#import bevy_pbr::view_transformations::{
direction_world_to_clip,
ndc_to_uv,
perspective_camera_near,
position_world_to_ndc,
}
// Allows us to sample from the depth buffer with bilinear filtering.
@group(1) @binding(2) var depth_linear_sampler: sampler;
// Allows us to sample from the depth buffer with nearest-neighbor filtering.
@group(1) @binding(3) var depth_nearest_sampler: sampler;
// Main code
struct HybridRootFinder {
linear_steps: u32,
bisection_steps: u32,
use_secant: bool,
linear_march_exponent: f32,
jitter: f32,
min_t: f32,
max_t: f32,
}
fn hybrid_root_finder_new_with_linear_steps(v: u32) -> HybridRootFinder {
var res: HybridRootFinder;
res.linear_steps = v;
res.bisection_steps = 0u;
res.use_secant = false;
res.linear_march_exponent = 1.0;
res.jitter = 1.0;
res.min_t = 0.0;
res.max_t = 1.0;
return res;
}
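Outside the shader, the hybrid scheme is easiest to see on a plain 1D function. The Rust sketch below (hypothetical, not part of the patch) brackets the first point where `f` goes negative with coarse linear steps, narrows the bracket by bisection, and finishes with one secant step, just as `hybrid_root_finder_find_root` does against the depth buffer.

```rust
// Hypothetical 1D analogue of the hybrid root finder: linear march to
// bracket the first sign change of `f` on [0, 1], bisection to narrow
// the bracket, then a single secant step for the final estimate.
fn find_root(f: impl Fn(f32) -> f32, linear_steps: u32, bisection_steps: u32) -> Option<f32> {
    let (mut min_t, mut max_t) = (0.0f32, 1.0f32);
    let (mut min_d, mut max_d) = (f(min_t), 0.0f32);

    // Linear march: find the first interval containing a sign change.
    let mut intersected = false;
    for step in 1..=linear_steps {
        let t = step as f32 / linear_steps as f32;
        let d = f(t);
        if d < 0.0 {
            max_t = t;
            max_d = d;
            intersected = true;
            break;
        }
        min_t = t;
        min_d = d;
    }
    if !intersected {
        return None;
    }

    // Bisection: repeatedly halve the bracketing interval.
    for _ in 0..bisection_steps {
        let mid_t = (min_t + max_t) * 0.5;
        let mid_d = f(mid_t);
        if mid_d < 0.0 {
            max_t = mid_t;
            max_d = mid_d;
        } else {
            min_t = mid_t;
            min_d = mid_d;
        }
    }

    // Secant: intersect the line through (min_t, min_d) and (max_t, max_d)
    // with zero, as in the shader's `use_secant` branch.
    Some(min_t + (max_t - min_t) * (min_d / (min_d - max_d)))
}

fn main() {
    // f crosses zero at t = 0.6; the hybrid finder lands very close to it.
    let root = find_root(|t| 0.6 - t, 8, 4).unwrap();
    assert!((root - 0.6).abs() < 1e-3);
    // A function that never goes negative yields no hit.
    assert!(find_root(|_| 1.0, 8, 4).is_none());
}
```

Because the secant step solves a line-line intersection exactly, this toy example converges to the root of any linear `f` in one step; the shader only accepts the secant result when it actually improves on the bisection bracket.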
fn hybrid_root_finder_find_root(
root_finder: ptr<function, HybridRootFinder>,
start: vec3<f32>,
end: vec3<f32>,
distance_fn: ptr<function, DepthRaymarchDistanceFn>,
hit_t: ptr<function, f32>,
miss_t: ptr<function, f32>,
hit_d: ptr<function, DistanceWithPenetration>,
) -> bool {
let dir = end - start;
var min_t = (*root_finder).min_t;
var max_t = (*root_finder).max_t;
var min_d = DistanceWithPenetration(0.0, false, 0.0);
var max_d = DistanceWithPenetration(0.0, false, 0.0);
let step_size = (max_t - min_t) / f32((*root_finder).linear_steps);
var intersected = false;
//
// Ray march using linear steps
if ((*root_finder).linear_steps > 0u) {
let candidate_t = mix(
min_t,
max_t,
pow(
(*root_finder).jitter / f32((*root_finder).linear_steps),
(*root_finder).linear_march_exponent
)
);
let candidate = start + dir * candidate_t;
let candidate_d = depth_raymarch_distance_fn_evaluate(distance_fn, candidate);
intersected = candidate_d.distance < 0.0 && candidate_d.valid;
if (intersected) {
max_t = candidate_t;
max_d = candidate_d;
// The `[min_t .. max_t]` interval contains an intersection. End the linear search.
} else {
// No intersection yet. Carry on.
min_t = candidate_t;
min_d = candidate_d;
for (var step = 1u; step < (*root_finder).linear_steps; step += 1u) {
let candidate_t = mix(
(*root_finder).min_t,
(*root_finder).max_t,
pow(
(f32(step) + (*root_finder).jitter) / f32((*root_finder).linear_steps),
(*root_finder).linear_march_exponent
)
);
let candidate = start + dir * candidate_t;
let candidate_d = depth_raymarch_distance_fn_evaluate(distance_fn, candidate);
intersected = candidate_d.distance < 0.0 && candidate_d.valid;
if (intersected) {
max_t = candidate_t;
max_d = candidate_d;
// The `[min_t .. max_t]` interval contains an intersection.
// End the linear search.
break;
} else {
// No intersection yet. Carry on.
min_t = candidate_t;
min_d = candidate_d;
}
}
}
}
*miss_t = min_t;
*hit_t = min_t;
//
// Refine the hit using bisection
if (intersected) {
for (var step = 0u; step < (*root_finder).bisection_steps; step += 1u) {
let mid_t = (min_t + max_t) * 0.5;
let candidate = start + dir * mid_t;
let candidate_d = depth_raymarch_distance_fn_evaluate(distance_fn, candidate);
if (candidate_d.distance < 0.0 && candidate_d.valid) {
// Intersection at the mid point. Refine the first half.
max_t = mid_t;
max_d = candidate_d;
} else {
// No intersection yet at the mid point. Refine the second half.
min_t = mid_t;
min_d = candidate_d;
}
}
if ((*root_finder).use_secant) {
// Finish with one application of the secant method
let total_d = min_d.distance + -max_d.distance;
let mid_t = mix(min_t, max_t, min_d.distance / total_d);
let candidate = start + dir * mid_t;
let candidate_d = depth_raymarch_distance_fn_evaluate(distance_fn, candidate);
// Only accept the result of the secant method if it improves upon
// the previous result.
//
// Technically this should be `abs(candidate_d.distance) <
// min(min_d.distance, -max_d.distance) * frac`, but this seems
// sufficient.
if (abs(candidate_d.distance) < min_d.distance * 0.9 && candidate_d.valid) {
*hit_t = mid_t;
*hit_d = candidate_d;
} else {
*hit_t = max_t;
*hit_d = max_d;
}
return true;
} else {
*hit_t = max_t;
*hit_d = max_d;
return true;
}
} else {
// Mark the conservative miss distance.
*hit_t = min_t;
return false;
}
}
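The search strategy above, a coarse linear march, bisection refinement, and a final secant step, can be illustrated in plain one-dimensional form, detached from any depth buffer. The sketch below is ours for exposition (the name `find_root_1d` and its signature are not part of this patch); `f` stands in for the signed distance function, negative once the ray is behind the surface.

```rust
// 1D sketch of the hybrid root finder: march in equal steps until the signed
// distance flips negative, bisect the bracketing interval, then finish with
// one secant step along the chord between the bracketing samples.
fn find_root_1d(f: impl Fn(f32) -> f32, linear_steps: u32, bisection_steps: u32) -> Option<f32> {
    let (mut min_t, mut max_t) = (0.0_f32, 1.0_f32);
    let mut bracketed = false;
    for step in 1..=linear_steps {
        let t = step as f32 / linear_steps as f32;
        if f(t) < 0.0 {
            // The interval [min_t, t] contains a root; end the linear search.
            max_t = t;
            bracketed = true;
            break;
        }
        min_t = t;
    }
    if !bracketed {
        return None; // The ray never crossed the surface: a miss.
    }
    for _ in 0..bisection_steps {
        let mid = 0.5 * (min_t + max_t);
        if f(mid) < 0.0 { max_t = mid } else { min_t = mid }
    }
    // Secant step: where the straight line between the two samples crosses zero.
    let (fa, fb) = (f(min_t), f(max_t));
    Some(min_t + (max_t - min_t) * fa / (fa - fb))
}
```

The shader version additionally carries jitter, a march exponent, and the validity/penetration bookkeeping, but the control flow is the same.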
struct DistanceWithPenetration {
/// Distance to the surface of which a root we're trying to find
distance: f32,
/// Whether to consider this sample valid for intersection.
/// Mostly relevant for allowing the ray marcher to travel behind surfaces,
/// as it will mark surfaces it travels under as invalid.
valid: bool,
/// Conservative estimate of depth to which the ray penetrates the marched surface.
penetration: f32,
}
struct DepthRaymarchDistanceFn {
depth_tex_size: vec2<f32>,
march_behind_surfaces: bool,
depth_thickness: f32,
use_sloppy_march: bool,
}
fn depth_raymarch_distance_fn_evaluate(
distance_fn: ptr<function, DepthRaymarchDistanceFn>,
ray_point_cs: vec3<f32>,
) -> DistanceWithPenetration {
let interp_uv = ndc_to_uv(ray_point_cs.xy);
let ray_depth = 1.0 / ray_point_cs.z;
// We're using both point-sampled and bilinear-filtered values from the depth buffer.
//
// That's really stupid but works like magic. For samples taken near the ray origin,
// the discrete nature of the depth buffer becomes a problem. It's not a land of continuous surfaces,
// but a bunch of stacked duplo bricks.
//
// Technically we should be taking discrete steps in this duplo land, but then we're at the mercy
// of arbitrary quantization of our directions -- and sometimes we'll take a step which would
// claim that the ray is occluded -- even though the underlying smooth surface wouldn't occlude it.
//
// If we instead take linear taps from the depth buffer, we reconstruct the linear surface.
// That fixes acne, but introduces false shadowing near object boundaries, as we now pretend
// that everything is shrink-wrapped by a continuous 2.5D surface, and our depth thickness
// heuristic ends up falling apart.
//
// The fix is to consider both the smooth and the discrete surfaces, and only claim occlusion
// when the ray descends below both.
//
// The two approaches end up fixing each other's artifacts:
// * The false occlusions due to duplo land are rejected because the ray stays above the smooth surface.
// * The shrink-wrap surface is no longer continuous, so it's possible for rays to miss it.
let linear_depth =
1.0 / textureSampleLevel(depth_prepass_texture, depth_linear_sampler, interp_uv, 0.0);
let unfiltered_depth =
1.0 / textureSampleLevel(depth_prepass_texture, depth_nearest_sampler, interp_uv, 0.0);
var max_depth: f32;
var min_depth: f32;
if ((*distance_fn).use_sloppy_march) {
max_depth = unfiltered_depth;
min_depth = unfiltered_depth;
} else {
max_depth = max(linear_depth, unfiltered_depth);
min_depth = min(linear_depth, unfiltered_depth);
}
let bias = 0.000002;
var res: DistanceWithPenetration;
res.distance = max_depth * (1.0 + bias) - ray_depth;
// This penetration value will be used at the end of the ray march to potentially discard the hit.
res.penetration = ray_depth - min_depth;
if ((*distance_fn).march_behind_surfaces) {
res.valid = res.penetration < (*distance_fn).depth_thickness;
} else {
res.valid = true;
}
return res;
}
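The dual-surface heuristic described in the long comment above condenses to a small decision rule. The helper below is hypothetical (not in the shader) and operates on already-linearized depths, larger values meaning farther from the camera:

```rust
// Occlusion test combining the bilinear ("smooth") and nearest ("duplo")
// depth reconstructions. The ray only counts as occluded once it is behind
// BOTH surfaces; penetration is measured against the nearer of the two, and
// a hit is discarded when it tunnels deeper than the assumed thickness.
fn classify_sample(ray_depth: f32, bilinear: f32, nearest: f32, thickness: f32) -> (bool, bool) {
    let max_depth = bilinear.max(nearest); // farther reconstruction
    let min_depth = bilinear.min(nearest); // nearer reconstruction
    let occluded = ray_depth > max_depth;
    let penetration = ray_depth - min_depth;
    let within_thickness = penetration < thickness;
    (occluded, occluded && within_thickness)
}
```

In the shader the thickness check is only applied when `march_behind_surfaces` is enabled (and again when accepting the final hit); the sketch folds it into one place for clarity.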
struct DepthRayMarchResult {
/// True if the raymarch hit something.
hit: bool,
/// In case of a hit, the normalized distance to it.
///
/// In case of a miss, the furthest the ray managed to travel, which could either be
/// exceeding the max range, or getting behind a surface further than the depth thickness.
///
/// Range: `0..=1` as a lerp factor over `ray_start_cs..=ray_end_cs`.
hit_t: f32,
/// UV corresponding to `hit_t`.
hit_uv: vec2<f32>,
/// The distance that the hit point penetrates into the hit surface.
/// Will normally be non-zero due to limited precision of the ray march.
///
/// In case of a miss: undefined.
hit_penetration: f32,
/// Ditto, within the range `0..DepthRayMarch::depth_thickness_linear_z`
///
/// In case of a miss: undefined.
hit_penetration_frac: f32,
}
struct DepthRayMarch {
/// Number of steps to be taken at regular intervals to find an initial intersection.
/// Must not be zero.
linear_steps: u32,
/// Exponent to be applied in the linear part of the march.
///
/// A value of 1.0 will result in equidistant steps, and higher values will compress
/// the earlier steps, and expand the later ones. This might be desirable in order
/// to get more detail close to objects in SSR or SSGI.
///
/// For optimal performance, this should be a small compile-time constant,
/// such as 1 or 2.
linear_march_exponent: f32,
/// Number of steps in a bisection (binary search) to perform once the linear search
/// has found an intersection. Helps narrow down the hit, increasing the chance of
/// the secant method finding an accurate hit point.
///
/// Useful when sampling color, e.g. SSR or SSGI, but pointless for contact shadows.
bisection_steps: u32,
/// Approximate the root position using the secant method -- by solving for line-line
/// intersection between the ray approach rate and the surface gradient.
///
/// Useful when sampling color, e.g. SSR or SSGI, but pointless for contact shadows.
use_secant: bool,
/// Jitter to apply to the first step of the linear search; 0..=1 range, mapping
/// to the extent of a single linear step in the first phase of the search.
/// Use 1.0 if you don't want jitter.
jitter: f32,
/// Clip space coordinates (w=1) of the ray.
ray_start_cs: vec3<f32>,
ray_end_cs: vec3<f32>,
/// Should be used for contact shadows, but not for any color bounce, e.g. SSR.
///
/// For SSR etc. this can easily create leaks, but with contact shadows it allows the rays
/// to pass over invalid occlusions (due to thickness), and find potentially valid ones ahead.
///
/// Note that this will cause the linear search to potentially miss surfaces,
/// because when the ray overshoots and ends up penetrating a surface further than
/// `depth_thickness_linear_z`, the ray marcher will just carry on.
///
/// For this reason, this may require a lot of samples, or high depth thickness,
/// so that `depth_thickness_linear_z >= world space ray length / linear_steps`.
march_behind_surfaces: bool,
/// If `true`, the ray marcher only performs nearest lookups of the depth buffer,
/// resulting in aliasing and false occlusion when marching tiny detail.
/// It should work fine for longer traces with fewer rays though.
use_sloppy_march: bool,
/// When marching the depth buffer, we only have 2.5D information, and don't know how
/// thick surfaces are. We shall assume that the depth buffer fragments are little squares
/// with a constant thickness defined by this parameter.
depth_thickness_linear_z: f32,
/// Size of the depth buffer we're marching in, in pixels.
depth_tex_size: vec2<f32>,
}
fn depth_ray_march_new_from_depth(depth_tex_size: vec2<f32>) -> DepthRayMarch {
var res: DepthRayMarch;
res.jitter = 1.0;
res.linear_steps = 4u;
res.bisection_steps = 0u;
res.linear_march_exponent = 1.0;
res.depth_tex_size = depth_tex_size;
res.depth_thickness_linear_z = 1.0;
res.march_behind_surfaces = false;
res.use_sloppy_march = false;
return res;
}
fn depth_ray_march_to_cs_dir_impl(
raymarch: ptr<function, DepthRayMarch>,
dir_cs: vec4<f32>,
infinite: bool,
) {
var end_cs = vec4((*raymarch).ray_start_cs, 1.0) + dir_cs;
// Perform perspective division, but avoid dividing by zero for rays
// heading directly towards the eye.
end_cs /= select(-1.0, 1.0, end_cs.w >= 0.0) * max(1e-10, abs(end_cs.w));
// Clip ray start to the view frustum
var delta_cs = end_cs.xyz - (*raymarch).ray_start_cs;
let near_edge = select(vec3(-1.0, -1.0, 0.0), vec3(1.0, 1.0, 1.0), delta_cs < vec3(0.0));
let dist_to_near_edge = (near_edge - (*raymarch).ray_start_cs) / delta_cs;
let max_dist_to_near_edge = max(dist_to_near_edge.x, dist_to_near_edge.y);
(*raymarch).ray_start_cs += delta_cs * max(0.0, max_dist_to_near_edge);
// Clip ray end to the view frustum
delta_cs = end_cs.xyz - (*raymarch).ray_start_cs;
let far_edge = select(vec3(-1.0, -1.0, 0.0), vec3(1.0, 1.0, 1.0), delta_cs >= vec3(0.0));
let dist_to_far_edge = (far_edge - (*raymarch).ray_start_cs) / delta_cs;
let min_dist_to_far_edge = min(
min(dist_to_far_edge.x, dist_to_far_edge.y),
dist_to_far_edge.z
);
if (infinite) {
delta_cs *= min_dist_to_far_edge;
} else {
// Bounded ray: clamp to the original ray length. Without the `min`, the
// ray would be extended all the way to the edge of the frustum.
delta_cs *= min(1.0, min_dist_to_far_edge);
}
(*raymarch).ray_end_cs = (*raymarch).ray_start_cs + delta_cs;
}
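The clipping above is essentially the slab method applied to the NDC box. A reduced 2D Rust sketch (the function name and the 2D restriction are ours, and it assumes the segment actually overlaps the box, as the shader's rays do by construction):

```rust
// Clip a 2D segment against the NDC square [-1, 1]^2 with the slab method:
// intersect the segment's parameter range [0, 1] with each axis-aligned slab,
// then push the start forward and pull the end back to the clipped range.
fn clip_segment_to_ndc(start: [f32; 2], end: [f32; 2]) -> ([f32; 2], [f32; 2]) {
    let delta = [end[0] - start[0], end[1] - start[1]];
    let (mut t_min, mut t_max) = (0.0_f32, 1.0_f32);
    for axis in 0..2 {
        if delta[axis] != 0.0 {
            let t0 = (-1.0 - start[axis]) / delta[axis];
            let t1 = (1.0 - start[axis]) / delta[axis];
            t_min = t_min.max(t0.min(t1));
            t_max = t_max.min(t0.max(t1));
        }
    }
    let at = |t: f32| [start[0] + delta[0] * t, start[1] + delta[1] * t];
    (at(t_min), at(t_max))
}
```

The shader does the same per-axis work with `select` over `vec3` edges, clips z against `[0, 1]` rather than `[-1, 1]`, and handles the start and end points in two separate passes.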
/// March from a clip-space position (w = 1)
fn depth_ray_march_from_cs(raymarch: ptr<function, DepthRayMarch>, v: vec3<f32>) {
(*raymarch).ray_start_cs = v;
}
/// March to a clip-space position (w = 1)
///
/// Must be called after `from_cs`, as it will clip the ray to the view frustum.
fn depth_ray_march_to_cs(raymarch: ptr<function, DepthRayMarch>, end_cs: vec3<f32>) {
let dir = vec4(end_cs - (*raymarch).ray_start_cs, 0.0) * sign(end_cs.z);
depth_ray_march_to_cs_dir_impl(raymarch, dir, false);
}
/// March towards a clip-space direction. Infinite (ray is extended to cover the whole view frustum).
///
/// Must be called after `from_cs`, as it will clip the ray to the view frustum.
fn depth_ray_march_to_cs_dir(raymarch: ptr<function, DepthRayMarch>, dir: vec4<f32>) {
depth_ray_march_to_cs_dir_impl(raymarch, dir, true);
}
/// March to a world-space position.
///
/// Must be called after `from_cs`, as it will clip the world-space ray to the view frustum.
fn depth_ray_march_to_ws(raymarch: ptr<function, DepthRayMarch>, end: vec3<f32>) {
depth_ray_march_to_cs(raymarch, position_world_to_ndc(end));
}
/// March towards a world-space direction. Infinite (ray is extended to cover the whole view frustum).
///
/// Must be called after `from_cs`, as it will clip the world-space ray to the view frustum.
fn depth_ray_march_to_ws_dir(raymarch: ptr<function, DepthRayMarch>, dir: vec3<f32>) {
depth_ray_march_to_cs_dir_impl(raymarch, direction_world_to_clip(dir), true);
}
/// Perform the ray march.
fn depth_ray_march_march(raymarch: ptr<function, DepthRayMarch>) -> DepthRayMarchResult {
var res = DepthRayMarchResult(false, 0.0, vec2(0.0), 0.0, 0.0);
let ray_start_uv = ndc_to_uv((*raymarch).ray_start_cs.xy);
let ray_end_uv = ndc_to_uv((*raymarch).ray_end_cs.xy);
let ray_uv_delta = ray_end_uv - ray_start_uv;
let ray_len_px = ray_uv_delta * (*raymarch).depth_tex_size;
let min_px_per_step = 1u;
let step_count = max(
2,
min(i32((*raymarch).linear_steps), i32(floor(length(ray_len_px) / f32(min_px_per_step))))
);
let linear_z_to_scaled_linear_z = 1.0 / perspective_camera_near();
let depth_thickness = (*raymarch).depth_thickness_linear_z * linear_z_to_scaled_linear_z;
var distance_fn: DepthRaymarchDistanceFn;
distance_fn.depth_tex_size = (*raymarch).depth_tex_size;
distance_fn.march_behind_surfaces = (*raymarch).march_behind_surfaces;
distance_fn.depth_thickness = depth_thickness;
distance_fn.use_sloppy_march = (*raymarch).use_sloppy_march;
var hit: DistanceWithPenetration;
var hit_t = 0.0;
var miss_t = 0.0;
var root_finder = hybrid_root_finder_new_with_linear_steps(u32(step_count));
root_finder.bisection_steps = (*raymarch).bisection_steps;
root_finder.use_secant = (*raymarch).use_secant;
root_finder.linear_march_exponent = (*raymarch).linear_march_exponent;
root_finder.jitter = (*raymarch).jitter;
let intersected = hybrid_root_finder_find_root(
&root_finder,
(*raymarch).ray_start_cs,
(*raymarch).ray_end_cs,
&distance_fn,
&hit_t,
&miss_t,
&hit
);
res.hit_t = hit_t;
if (intersected && hit.penetration < depth_thickness && hit.distance < depth_thickness) {
res.hit = true;
res.hit_uv = mix(ray_start_uv, ray_end_uv, res.hit_t);
res.hit_penetration = hit.penetration / linear_z_to_scaled_linear_z;
res.hit_penetration_frac = hit.penetration / depth_thickness;
return res;
}
res.hit_t = miss_t;
res.hit_uv = mix(ray_start_uv, ray_end_uv, res.hit_t);
return res;
}

@@ -0,0 +1,185 @@
// A postprocessing pass that performs screen-space reflections.
#define_import_path bevy_pbr::ssr
#import bevy_core_pipeline::fullscreen_vertex_shader::FullscreenVertexOutput
#import bevy_pbr::{
lighting,
lighting::{LAYER_BASE, LAYER_CLEARCOAT},
mesh_view_bindings::{view, depth_prepass_texture, deferred_prepass_texture, ssr_settings},
pbr_deferred_functions::pbr_input_from_deferred_gbuffer,
pbr_deferred_types,
pbr_functions,
prepass_utils,
raymarch::{
depth_ray_march_from_cs,
depth_ray_march_march,
depth_ray_march_new_from_depth,
depth_ray_march_to_ws_dir,
},
utils,
view_transformations::{
depth_ndc_to_view_z,
frag_coord_to_ndc,
ndc_to_frag_coord,
ndc_to_uv,
position_view_to_ndc,
position_world_to_ndc,
position_world_to_view,
},
}
#import bevy_render::view::View
#ifdef ENVIRONMENT_MAP
#import bevy_pbr::environment_map
#endif
// The texture representing the color framebuffer.
@group(1) @binding(0) var color_texture: texture_2d<f32>;
// The sampler that lets us sample from the color framebuffer.
@group(1) @binding(1) var color_sampler: sampler;
// Group 1, bindings 2 and 3 are in `raymarch.wgsl`.
// Returns the reflected color in the RGB channel and the specular occlusion in
// the alpha channel.
//
// The general approach here is similar to [1]. We first project the reflection
// ray into screen space. Then we perform uniform steps along that screen-space
// reflected ray, converting each step to view space.
//
// The arguments are:
//
// * `R_world`: The reflection vector in world space.
//
// * `P_world`: The current position in world space.
//
// [1]: https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html
fn evaluate_ssr(R_world: vec3<f32>, P_world: vec3<f32>) -> vec4<f32> {
let depth_size = vec2<f32>(textureDimensions(depth_prepass_texture));
var raymarch = depth_ray_march_new_from_depth(depth_size);
depth_ray_march_from_cs(&raymarch, position_world_to_ndc(P_world));
depth_ray_march_to_ws_dir(&raymarch, normalize(R_world));
raymarch.linear_steps = ssr_settings.linear_steps;
raymarch.bisection_steps = ssr_settings.bisection_steps;
raymarch.use_secant = ssr_settings.use_secant != 0u;
raymarch.depth_thickness_linear_z = ssr_settings.thickness;
raymarch.jitter = 1.0; // Disable jitter for now.
raymarch.march_behind_surfaces = false;
let raymarch_result = depth_ray_march_march(&raymarch);
if (raymarch_result.hit) {
return vec4(
textureSampleLevel(color_texture, color_sampler, raymarch_result.hit_uv, 0.0).rgb,
0.0
);
}
return vec4(0.0, 0.0, 0.0, 1.0);
}
@fragment
fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
// Sample the depth.
var frag_coord = in.position;
frag_coord.z = prepass_utils::prepass_depth(in.position, 0u);
// Load the G-buffer data.
let fragment = textureLoad(color_texture, vec2<i32>(frag_coord.xy), 0);
let gbuffer = textureLoad(deferred_prepass_texture, vec2<i32>(frag_coord.xy), 0);
let pbr_input = pbr_input_from_deferred_gbuffer(frag_coord, gbuffer);
// Don't do anything if the surface is too rough, since we can't blur or do
// temporal accumulation yet.
let perceptual_roughness = pbr_input.material.perceptual_roughness;
if (perceptual_roughness > ssr_settings.perceptual_roughness_threshold) {
return fragment;
}
// Unpack the PBR input.
var specular_occlusion = pbr_input.specular_occlusion;
let world_position = pbr_input.world_position.xyz;
let N = pbr_input.N;
let V = pbr_input.V;
// Calculate the reflection vector.
let R = reflect(-V, N);
// Do the raymarching.
let ssr_specular = evaluate_ssr(R, world_position);
var indirect_light = ssr_specular.rgb;
specular_occlusion *= ssr_specular.a;
// Sample the environment map if necessary.
//
// This will take the specular part of the environment map into account if
// the ray missed. Otherwise, it only takes the diffuse part.
//
// TODO: Merge this with the duplicated code in `apply_pbr_lighting`.
#ifdef ENVIRONMENT_MAP
// Unpack values required for environment mapping.
let base_color = pbr_input.material.base_color.rgb;
let metallic = pbr_input.material.metallic;
let reflectance = pbr_input.material.reflectance;
let specular_transmission = pbr_input.material.specular_transmission;
let diffuse_transmission = pbr_input.material.diffuse_transmission;
let diffuse_occlusion = pbr_input.diffuse_occlusion;
#ifdef STANDARD_MATERIAL_CLEARCOAT
// Do the above calculations again for the clearcoat layer. Remember that
// the clearcoat can have its own roughness and its own normal.
let clearcoat = pbr_input.material.clearcoat;
let clearcoat_perceptual_roughness = pbr_input.material.clearcoat_perceptual_roughness;
let clearcoat_roughness = lighting::perceptualRoughnessToRoughness(clearcoat_perceptual_roughness);
let clearcoat_N = pbr_input.clearcoat_N;
let clearcoat_NdotV = max(dot(clearcoat_N, pbr_input.V), 0.0001);
let clearcoat_R = reflect(-pbr_input.V, clearcoat_N);
#endif // STANDARD_MATERIAL_CLEARCOAT
// Calculate various other values needed for environment mapping.
let roughness = lighting::perceptualRoughnessToRoughness(perceptual_roughness);
let diffuse_color = pbr_functions::calculate_diffuse_color(
base_color,
metallic,
specular_transmission,
diffuse_transmission
);
let NdotV = max(dot(N, V), 0.0001);
let F_ab = lighting::F_AB(perceptual_roughness, NdotV);
let F0 = pbr_functions::calculate_F0(base_color, metallic, reflectance);
// Pack all the values into a structure.
var lighting_input: lighting::LightingInput;
lighting_input.layers[LAYER_BASE].NdotV = NdotV;
lighting_input.layers[LAYER_BASE].N = N;
lighting_input.layers[LAYER_BASE].R = R;
lighting_input.layers[LAYER_BASE].perceptual_roughness = perceptual_roughness;
lighting_input.layers[LAYER_BASE].roughness = roughness;
lighting_input.P = world_position.xyz;
lighting_input.V = V;
lighting_input.diffuse_color = diffuse_color;
lighting_input.F0_ = F0;
lighting_input.F_ab = F_ab;
#ifdef STANDARD_MATERIAL_CLEARCOAT
lighting_input.layers[LAYER_CLEARCOAT].NdotV = clearcoat_NdotV;
lighting_input.layers[LAYER_CLEARCOAT].N = clearcoat_N;
lighting_input.layers[LAYER_CLEARCOAT].R = clearcoat_R;
lighting_input.layers[LAYER_CLEARCOAT].perceptual_roughness = clearcoat_perceptual_roughness;
lighting_input.layers[LAYER_CLEARCOAT].roughness = clearcoat_roughness;
lighting_input.clearcoat_strength = clearcoat;
#endif // STANDARD_MATERIAL_CLEARCOAT
// Sample the environment map.
let environment_light = environment_map::environment_map_light(&lighting_input, false);
// Accumulate the environment map light.
indirect_light += view.exposure *
(environment_light.diffuse * diffuse_occlusion +
environment_light.specular * specular_occlusion);
#endif
// Write the results.
return vec4(fragment.rgb + indirect_light, 1.0);
}


@@ -8,7 +8,8 @@ mod camera_controller;
 use bevy::{
     pbr::{
         experimental::meshlet::{MaterialMeshletMeshBundle, MeshletPlugin},
-        CascadeShadowConfigBuilder, DirectionalLightShadowMap,
+        CascadeShadowConfigBuilder, DirectionalLightShadowMap, ScreenSpaceReflectionsBundle,
+        ScreenSpaceReflectionsSettings,
     },
     prelude::*,
     render::render_resource::AsBindGroup,
@@ -57,6 +58,14 @@ fn setup(
             intensity: 150.0,
         },
         CameraController::default(),
+        ScreenSpaceReflectionsBundle {
+            settings: ScreenSpaceReflectionsSettings {
+                perceptual_roughness_threshold: 0.1,
+                thickness: 0.1,
+                ..Default::default()
+            },
+            ..Default::default()
+        },
     ));
     commands.spawn(DirectionalLightBundle {
@@ -123,8 +132,8 @@ fn setup(
     commands.spawn(PbrBundle {
         mesh: meshes.add(Plane3d::default().mesh().size(5.0, 5.0)),
         material: standard_materials.add(StandardMaterial {
-            base_color: Color::WHITE,
-            perceptual_roughness: 1.0,
+            base_color: Color::BLACK,
+            perceptual_roughness: 0.0,
             ..default()
         }),
         ..default()

examples/3d/ssr.rs (new file)

@@ -0,0 +1,427 @@
//! Demonstrates screen space reflections in deferred rendering.
use std::ops::Range;
use bevy::{
color::palettes::css::{BLACK, WHITE},
core_pipeline::{fxaa::Fxaa, Skybox},
input::mouse::MouseWheel,
math::{vec3, vec4},
pbr::{
DefaultOpaqueRendererMethod, ExtendedMaterial, MaterialExtension,
ScreenSpaceReflectionsBundle, ScreenSpaceReflectionsSettings,
},
prelude::*,
render::{
render_resource::{AsBindGroup, ShaderRef, ShaderType},
texture::{
ImageAddressMode, ImageFilterMode, ImageLoaderSettings, ImageSampler,
ImageSamplerDescriptor,
},
},
};
// The speed of camera movement.
const CAMERA_KEYBOARD_ZOOM_SPEED: f32 = 0.1;
const CAMERA_KEYBOARD_ORBIT_SPEED: f32 = 0.02;
const CAMERA_MOUSE_WHEEL_ZOOM_SPEED: f32 = 0.25;
// We clamp camera distances to this range.
const CAMERA_ZOOM_RANGE: Range<f32> = 2.0..12.0;
static TURN_SSR_OFF_HELP_TEXT: &str = "Press Space to turn screen-space reflections off";
static TURN_SSR_ON_HELP_TEXT: &str = "Press Space to turn screen-space reflections on";
static MOVE_CAMERA_HELP_TEXT: &str =
"Press WASD or use the mouse wheel to pan and orbit the camera";
static SWITCH_TO_FLIGHT_HELMET_HELP_TEXT: &str = "Press Enter to switch to the flight helmet model";
static SWITCH_TO_CUBE_HELP_TEXT: &str = "Press Enter to switch to the cube model";
/// A custom [`ExtendedMaterial`] that creates animated water ripples.
#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
struct Water {
/// The normal map image.
///
/// Note that, like all normal maps, this must not be loaded as sRGB.
#[texture(100)]
#[sampler(101)]
normals: Handle<Image>,
// Parameters to the water shader.
#[uniform(102)]
settings: WaterSettings,
}
/// Parameters to the water shader.
#[derive(ShaderType, Debug, Clone)]
struct WaterSettings {
/// How much to displace each octave each frame, in the u and v directions.
/// Two octaves are packed into each `vec4`.
octave_vectors: [Vec4; 2],
/// How wide the waves are in each octave.
octave_scales: Vec4,
/// How high the waves are in each octave.
octave_strengths: Vec4,
}
/// The current settings that the user has chosen.
#[derive(Resource)]
struct AppSettings {
/// Whether screen space reflections are on.
ssr_on: bool,
/// Which model is being displayed.
displayed_model: DisplayedModel,
}
/// Which model is being displayed.
#[derive(Default)]
enum DisplayedModel {
/// The cube is being displayed.
#[default]
Cube,
/// The flight helmet is being displayed.
FlightHelmet,
}
/// A marker component for the cube model.
#[derive(Component)]
struct CubeModel;
/// A marker component for the flight helmet model.
#[derive(Component)]
struct FlightHelmetModel;
fn main() {
// Enable deferred rendering, which is necessary for screen-space
// reflections at this time. Disable multisampled antialiasing, as deferred
// rendering doesn't support that.
App::new()
.insert_resource(Msaa::Off)
.insert_resource(DefaultOpaqueRendererMethod::deferred())
.init_resource::<AppSettings>()
.add_plugins(DefaultPlugins.set(WindowPlugin {
primary_window: Some(Window {
title: "Bevy Screen Space Reflections Example".into(),
..default()
}),
..default()
}))
.add_plugins(MaterialPlugin::<ExtendedMaterial<StandardMaterial, Water>>::default())
.add_systems(Startup, setup)
.add_systems(Update, rotate_model)
.add_systems(Update, move_camera)
.add_systems(Update, adjust_app_settings)
.run();
}
// Set up the scene.
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut standard_materials: ResMut<Assets<StandardMaterial>>,
mut water_materials: ResMut<Assets<ExtendedMaterial<StandardMaterial, Water>>>,
asset_server: Res<AssetServer>,
app_settings: Res<AppSettings>,
) {
spawn_cube(
&mut commands,
&asset_server,
&mut meshes,
&mut standard_materials,
);
spawn_flight_helmet(&mut commands, &asset_server);
spawn_water(
&mut commands,
&asset_server,
&mut meshes,
&mut water_materials,
);
spawn_camera(&mut commands, &asset_server);
spawn_text(&mut commands, &asset_server, &app_settings);
}
// Spawns the rotating cube.
fn spawn_cube(
commands: &mut Commands,
asset_server: &AssetServer,
meshes: &mut Assets<Mesh>,
standard_materials: &mut Assets<StandardMaterial>,
) {
commands
.spawn(PbrBundle {
mesh: meshes.add(Cuboid::new(1.0, 1.0, 1.0)),
material: standard_materials.add(StandardMaterial {
base_color: Color::from(WHITE),
base_color_texture: Some(asset_server.load("branding/icon.png")),
..default()
}),
transform: Transform::from_xyz(0.0, 0.5, 0.0),
..default()
})
.insert(CubeModel);
}
// Spawns the flight helmet.
fn spawn_flight_helmet(commands: &mut Commands, asset_server: &AssetServer) {
commands
.spawn(SceneBundle {
scene: asset_server.load("models/FlightHelmet/FlightHelmet.gltf#Scene0"),
transform: Transform::from_scale(Vec3::splat(2.5)),
..default()
})
.insert(FlightHelmetModel)
.insert(Visibility::Hidden);
}
// Spawns the water plane.
fn spawn_water(
commands: &mut Commands,
asset_server: &AssetServer,
meshes: &mut Assets<Mesh>,
water_materials: &mut Assets<ExtendedMaterial<StandardMaterial, Water>>,
) {
commands.spawn(MaterialMeshBundle {
mesh: meshes.add(Plane3d::new(Vec3::Y, Vec2::splat(1.0))),
material: water_materials.add(ExtendedMaterial {
base: StandardMaterial {
base_color: BLACK.into(),
perceptual_roughness: 0.0,
..default()
},
extension: Water {
normals: asset_server.load_with_settings::<Image, ImageLoaderSettings>(
"textures/water_normals.png",
|settings| {
settings.is_srgb = false;
settings.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
address_mode_u: ImageAddressMode::Repeat,
address_mode_v: ImageAddressMode::Repeat,
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
..default()
});
},
),
// These water settings are just random values to create some
// variety.
settings: WaterSettings {
octave_vectors: [
vec4(0.080, 0.059, 0.073, -0.062),
vec4(0.153, 0.138, -0.149, -0.195),
],
octave_scales: vec4(1.0, 2.1, 7.9, 14.9) * 5.0,
octave_strengths: vec4(0.16, 0.18, 0.093, 0.044),
},
},
}),
transform: Transform::from_scale(Vec3::splat(100.0)),
..default()
});
}
// Spawns the camera.
fn spawn_camera(commands: &mut Commands, asset_server: &AssetServer) {
// Create the camera. Add an environment map and skybox so the water has
// something interesting to reflect, other than the cube. Enable deferred
// rendering by adding depth and deferred prepasses. Turn on FXAA to make
// the scene look a little nicer. Finally, add screen space reflections.
commands
.spawn(Camera3dBundle {
transform: Transform::from_translation(vec3(-1.25, 2.25, 4.5))
.looking_at(Vec3::ZERO, Vec3::Y),
camera: Camera {
hdr: true,
..default()
},
..default()
})
.insert(EnvironmentMapLight {
diffuse_map: asset_server.load("environment_maps/pisa_diffuse_rgb9e5_zstd.ktx2"),
specular_map: asset_server.load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
intensity: 5000.0,
})
.insert(Skybox {
image: asset_server.load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
brightness: 5000.0,
})
.insert(ScreenSpaceReflectionsBundle::default())
.insert(Fxaa::default());
}
// Spawns the help text.
fn spawn_text(commands: &mut Commands, asset_server: &AssetServer, app_settings: &AppSettings) {
commands.spawn(
TextBundle {
text: create_text(asset_server, app_settings),
..TextBundle::default()
}
.with_style(Style {
position_type: PositionType::Absolute,
bottom: Val::Px(10.0),
left: Val::Px(10.0),
..default()
}),
);
}
// Creates or recreates the help text.
fn create_text(asset_server: &AssetServer, app_settings: &AppSettings) -> Text {
Text::from_section(
format!(
"{}\n{}\n{}",
match app_settings.displayed_model {
DisplayedModel::Cube => SWITCH_TO_FLIGHT_HELMET_HELP_TEXT,
DisplayedModel::FlightHelmet => SWITCH_TO_CUBE_HELP_TEXT,
},
if app_settings.ssr_on {
TURN_SSR_OFF_HELP_TEXT
} else {
TURN_SSR_ON_HELP_TEXT
},
MOVE_CAMERA_HELP_TEXT
),
TextStyle {
font: asset_server.load("fonts/FiraMono-Medium.ttf"),
font_size: 24.0,
..default()
},
)
}
impl MaterialExtension for Water {
fn deferred_fragment_shader() -> ShaderRef {
"shaders/water_material.wgsl".into()
}
}
/// Rotates the model on the Y axis a bit every frame.
fn rotate_model(
mut query: Query<&mut Transform, Or<(With<CubeModel>, With<FlightHelmetModel>)>>,
time: Res<Time>,
) {
for mut transform in query.iter_mut() {
transform.rotation = Quat::from_euler(EulerRot::XYZ, 0.0, time.elapsed_seconds(), 0.0);
}
}
// Processes input related to camera movement.
fn move_camera(
keyboard_input: Res<ButtonInput<KeyCode>>,
mut mouse_wheel_input: EventReader<MouseWheel>,
mut cameras: Query<&mut Transform, With<Camera>>,
) {
let (mut distance_delta, mut theta_delta) = (0.0, 0.0);
// Handle keyboard events.
if keyboard_input.pressed(KeyCode::KeyW) {
distance_delta -= CAMERA_KEYBOARD_ZOOM_SPEED;
}
if keyboard_input.pressed(KeyCode::KeyS) {
distance_delta += CAMERA_KEYBOARD_ZOOM_SPEED;
}
if keyboard_input.pressed(KeyCode::KeyA) {
theta_delta += CAMERA_KEYBOARD_ORBIT_SPEED;
}
if keyboard_input.pressed(KeyCode::KeyD) {
theta_delta -= CAMERA_KEYBOARD_ORBIT_SPEED;
}
// Handle mouse events.
for mouse_wheel_event in mouse_wheel_input.read() {
distance_delta -= mouse_wheel_event.y * CAMERA_MOUSE_WHEEL_ZOOM_SPEED;
}
// Update transforms.
for mut camera_transform in cameras.iter_mut() {
let local_z = camera_transform.local_z().as_vec3().normalize_or_zero();
if distance_delta != 0.0 {
camera_transform.translation = (camera_transform.translation.length() + distance_delta)
.clamp(CAMERA_ZOOM_RANGE.start, CAMERA_ZOOM_RANGE.end)
* local_z;
}
if theta_delta != 0.0 {
camera_transform
.translate_around(Vec3::ZERO, Quat::from_axis_angle(Vec3::Y, theta_delta));
camera_transform.look_at(Vec3::ZERO, Vec3::Y);
}
}
}

// Adjusts app settings per user input.
#[allow(clippy::too_many_arguments)]
fn adjust_app_settings(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    keyboard_input: Res<ButtonInput<KeyCode>>,
    mut app_settings: ResMut<AppSettings>,
    cameras: Query<Entity, With<Camera>>,
    mut cube_models: Query<&mut Visibility, (With<CubeModel>, Without<FlightHelmetModel>)>,
    mut flight_helmet_models: Query<&mut Visibility, (Without<CubeModel>, With<FlightHelmetModel>)>,
    mut text: Query<&mut Text>,
) {
    // If there are no changes, we're going to bail for efficiency. Record that
    // here.
    let mut any_changes = false;

    // If the user pressed Space, toggle SSR.
    if keyboard_input.just_pressed(KeyCode::Space) {
        app_settings.ssr_on = !app_settings.ssr_on;
        any_changes = true;
    }

    // If the user pressed Enter, switch models.
    if keyboard_input.just_pressed(KeyCode::Enter) {
        app_settings.displayed_model = match app_settings.displayed_model {
            DisplayedModel::Cube => DisplayedModel::FlightHelmet,
            DisplayedModel::FlightHelmet => DisplayedModel::Cube,
        };
        any_changes = true;
    }

    // If there were no changes, bail.
    if !any_changes {
        return;
    }

    // Update SSR settings.
    for camera in cameras.iter() {
        if app_settings.ssr_on {
            commands
                .entity(camera)
                .insert(ScreenSpaceReflectionsSettings::default());
        } else {
            commands
                .entity(camera)
                .remove::<ScreenSpaceReflectionsSettings>();
        }
    }

    // Set cube model visibility.
    for mut cube_visibility in cube_models.iter_mut() {
        *cube_visibility = match app_settings.displayed_model {
            DisplayedModel::Cube => Visibility::Visible,
            _ => Visibility::Hidden,
        };
    }

    // Set flight helmet model visibility.
    for mut flight_helmet_visibility in flight_helmet_models.iter_mut() {
        *flight_helmet_visibility = match app_settings.displayed_model {
            DisplayedModel::FlightHelmet => Visibility::Visible,
            _ => Visibility::Hidden,
        };
    }

    // Update the help text.
    for mut text in text.iter_mut() {
        *text = create_text(&asset_server, &app_settings);
    }
}

impl Default for AppSettings {
    fn default() -> Self {
        Self {
            ssr_on: true,
            displayed_model: default(),
        }
    }
}


@@ -152,6 +152,7 @@ Example | Description
[Reflection Probes](../examples/3d/reflection_probes.rs) | Demonstrates reflection probes
[Render to Texture](../examples/3d/render_to_texture.rs) | Shows how to render to a texture, useful for mirrors, UI, or exporting images
[Screen Space Ambient Occlusion](../examples/3d/ssao.rs) | A scene showcasing screen space ambient occlusion
[Screen Space Reflections](../examples/3d/ssr.rs) | Demonstrates screen space reflections with water ripples
[Shadow Biases](../examples/3d/shadow_biases.rs) | Demonstrates how shadow biases affect shadows in a 3d scene
[Shadow Caster and Receiver](../examples/3d/shadow_caster_receiver.rs) | Demonstrates how to prevent meshes from casting/receiving shadows in a 3d scene
[Skybox](../examples/3d/skybox.rs) | Load a cubemap texture onto a cube like a skybox and cycle through different compressed texture formats.