Screen Space Ambient Occlusion (SSAO) MVP (#7402)

![image](https://github.com/bevyengine/bevy/assets/47158642/dbb62645-f639-4f2b-b84b-26fd915c186d)

# Objective

- Add screen space ambient occlusion (SSAO). SSAO approximates
small-scale, local occlusion of _indirect_ diffuse light between
objects. SSAO does not apply to direct lighting, such as point or
directional lights.
- This darkens creases, e.g. on staircases, and gives nice contact
shadows where objects meet, giving entities a more "grounded" feel.
- Closes https://github.com/bevyengine/bevy/issues/3632.

## Solution

- Implement the GTAO algorithm.
  - Paper: https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
  - Presentation: https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
- Source code heavily based on [Intel's XeGTAO](https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli).
- Add an SSAO bevy example.

## Algorithm Overview
* Run a depth and normal prepass
* Create downscaled mips of the depth texture (`preprocess_depth` pass)
* GTAO pass - for each pixel, take several random samples from the
depth+normal buffers, reconstruct the world position, and raytrace in
screen space to estimate occlusion. Rather than taking completely
random samples on a hemisphere, choose random _slices_ of the
hemisphere, and then analytically compute the full occlusion of each
slice. Edges are also computed here from depth differences.
* Spatial denoise pass - a bilateral blur that uses the detected edges
to avoid blurring across objects. This is the final SSAO result.
* Main pass - if SSAO exists, sample the SSAO texture, and set occlusion
to the minimum of the SSAO and material occlusion values. This then
feeds into the rest of the PBR shader as normal.
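The occlusion-combination rule from the last step above can be sketched in plain Rust (a hypothetical standalone helper for illustration, not the actual shader code):

```rust
/// Combine material occlusion with the (multibounce-adjusted) SSAO term,
/// per color channel, by taking the minimum of the two - mirroring the
/// main-pass rule described above.
fn combine_occlusion(material_occlusion: [f32; 3], ssao: [f32; 3]) -> [f32; 3] {
    [
        material_occlusion[0].min(ssao[0]),
        material_occlusion[1].min(ssao[1]),
        material_occlusion[2].min(ssao[2]),
    ]
}
```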

---

## Future Improvements
- Maybe remove the low quality preset for now (too noisy)
- WebGPU fallback (see below)
- Faster depth->world position (see reverted code)
- Bent normals 
- Try interleaved gradient noise or spatiotemporal blue noise
- Replace the spatial denoiser with a combined spatial+temporal denoiser
- Render at half resolution and use a bilateral upsample
- Better multibounce approximation
(https://drive.google.com/file/d/1SyagcEVplIm2KkRD3WQYSO9O0Iyi1hfy/view)

## Far-Future Performance Improvements
- F16 math (missing naga-wgsl support
https://github.com/gfx-rs/naga/issues/1884)
- Faster coordinate space conversion for normals
- Faster depth mipchain creation
(https://github.com/GPUOpen-Effects/FidelityFX-SPD) (wgpu/naga does not
currently support subgroup ops)
- Deinterleaved SSAO for better cache efficiency
(https://developer.nvidia.com/sites/default/files/akamai/gameworks/samples/DeinterleavedTexturing.pdf)

## Other Interesting Papers
- Visibility bitmask
(https://link.springer.com/article/10.1007/s00371-022-02703-y,
https://cdrinmatane.github.io/posts/cgspotlight-slides/)
- Screen space diffuse lighting
(https://github.com/Patapom/GodComplex/blob/master/Tests/TestHBIL/2018%20Mayaux%20-%20Horizon-Based%20Indirect%20Lighting%20(HBIL).pdf)

## Platform Support
* SSAO currently does not work on DirectX12 due to issues with wgpu and
naga:
  * https://github.com/gfx-rs/wgpu/pull/3798
  * https://github.com/gfx-rs/naga/pull/2353
* SSAO currently does not work on WebGPU because `r16float` is not a
valid storage texture format
(https://gpuweb.github.io/gpuweb/wgsl/#storage-texel-formats). We can
fix this with a fallback to `r32float`.

---

## Changelog

- Added `ScreenSpaceAmbientOcclusionSettings`,
`ScreenSpaceAmbientOcclusionQualityLevel`, and
`ScreenSpaceAmbientOcclusionBundle`

---------

Co-authored-by: IceSentry <c.giguere42@gmail.com>
Co-authored-by: IceSentry <IceSentry@users.noreply.github.com>
Co-authored-by: Daniel Chia <danstryder@gmail.com>
Co-authored-by: Elabajaba <Elabajaba@users.noreply.github.com>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Brandon Dyer <brandondyer64@gmail.com>
Co-authored-by: Edgar Geier <geieredgar@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Commit af9c945f40 (parent 6c86545736) by JMS55, committed via GitHub on 2023-06-18 17:05:55 -04:00.
18 changed files with 1678 additions and 30 deletions.


@@ -517,6 +517,16 @@ description = "Create a custom material to draw 3d lines"
category = "3D Rendering"
wasm = true
[[example]]
name = "ssao"
path = "examples/3d/ssao.rs"
[package.metadata.example.ssao]
name = "Screen Space Ambient Occlusion"
description = "A scene showcasing screen space ambient occlusion"
category = "3D Rendering"
wasm = false
[[example]]
name = "spotlight"
path = "examples/3d/spotlight.rs"


@@ -94,7 +94,6 @@ impl ViewNode for PrepassNode {
stencil_ops: None,
}),
});
if let Some(viewport) = camera.viewport.as_ref() {
render_pass.set_camera_viewport(viewport);
}


@@ -12,6 +12,7 @@ mod parallax;
mod pbr_material;
mod prepass;
mod render;
mod ssao;
pub use alpha::*;
pub use bundle::*;
@@ -23,6 +24,7 @@ pub use parallax::*;
pub use pbr_material::*;
pub use prepass::*;
pub use render::*;
pub use ssao::*;
pub mod prelude {
#[doc(hidden)]
@@ -38,6 +40,7 @@ pub mod prelude {
material::{Material, MaterialPlugin},
parallax::ParallaxMappingMethod,
pbr_material::StandardMaterial,
ssao::ScreenSpaceAmbientOcclusionPlugin,
};
}
@@ -184,6 +187,7 @@ impl Plugin for PbrPlugin {
prepass_enabled: self.prepass_enabled,
..Default::default()
})
.add_plugin(ScreenSpaceAmbientOcclusionPlugin)
.add_plugin(EnvironmentMapPlugin)
.init_resource::<AmbientLight>()
.init_resource::<GlobalVisiblePointLights>()


@@ -1,7 +1,7 @@
use crate::{
render, AlphaMode, DrawMesh, DrawPrepass, EnvironmentMapLight, MeshPipeline, MeshPipelineKey,
MeshUniform, PrepassPipelinePlugin, PrepassPlugin, RenderLightSystems, SetMeshBindGroup,
SetMeshViewBindGroup, Shadow,
MeshUniform, PrepassPipelinePlugin, PrepassPlugin, RenderLightSystems,
ScreenSpaceAmbientOcclusionSettings, SetMeshBindGroup, SetMeshViewBindGroup, Shadow,
};
use bevy_app::{App, Plugin};
use bevy_asset::{AddAsset, AssetEvent, AssetServer, Assets, Handle};
@@ -387,6 +387,7 @@ pub fn queue_material_meshes<M: Material>(
Option<&Tonemapping>,
Option<&DebandDither>,
Option<&EnvironmentMapLight>,
Option<&ScreenSpaceAmbientOcclusionSettings>,
Option<&NormalPrepass>,
Option<&TemporalAntiAliasSettings>,
&mut RenderPhase<Opaque3d>,
@@ -402,6 +403,7 @@
tonemapping,
dither,
environment_map,
ssao,
normal_prepass,
taa_settings,
mut opaque_phase,
@@ -455,6 +457,10 @@
}
}
if ssao.is_some() {
view_key |= MeshPipelineKey::SCREEN_SPACE_AMBIENT_OCCLUSION;
}
let rangefinder = view.rangefinder3d();
for visible_entity in &visible_entities.entities {
if let Ok((material_handle, mesh_handle, mesh_uniform)) =


@@ -1,9 +1,9 @@
use crate::{
environment_map, prepass, EnvironmentMapLight, FogMeta, GlobalLightMeta, GpuFog, GpuLights,
GpuPointLights, LightMeta, NotShadowCaster, NotShadowReceiver, PreviousGlobalTransform,
ShadowSamplers, ViewClusterBindings, ViewFogUniformOffset, ViewLightsUniformOffset,
ViewShadowBindings, CLUSTERED_FORWARD_STORAGE_BUFFER_COUNT, MAX_CASCADES_PER_LIGHT,
MAX_DIRECTIONAL_LIGHTS,
ScreenSpaceAmbientOcclusionTextures, ShadowSamplers, ViewClusterBindings, ViewFogUniformOffset,
ViewLightsUniformOffset, ViewShadowBindings, CLUSTERED_FORWARD_STORAGE_BUFFER_COUNT,
MAX_CASCADES_PER_LIGHT, MAX_DIRECTIONAL_LIGHTS,
};
use bevy_app::Plugin;
use bevy_asset::{load_internal_asset, Assets, Handle, HandleUntyped};
@@ -433,22 +433,33 @@ impl FromWorld for MeshPipeline {
},
count: None,
},
// Screen space ambient occlusion texture
BindGroupLayoutEntry {
binding: 11,
visibility: ShaderStages::FRAGMENT,
ty: BindingType::Texture {
multisampled: false,
sample_type: TextureSampleType::Float { filterable: false },
view_dimension: TextureViewDimension::D2,
},
count: None,
},
];
// EnvironmentMapLight
let environment_map_entries =
environment_map::get_bind_group_layout_entries([11, 12, 13]);
environment_map::get_bind_group_layout_entries([12, 13, 14]);
entries.extend_from_slice(&environment_map_entries);
// Tonemapping
let tonemapping_lut_entries = get_lut_bind_group_layout_entries([14, 15]);
let tonemapping_lut_entries = get_lut_bind_group_layout_entries([15, 16]);
entries.extend_from_slice(&tonemapping_lut_entries);
if cfg!(any(not(feature = "webgl"), not(target_arch = "wasm32")))
|| (cfg!(all(feature = "webgl", target_arch = "wasm32")) && !multisampled)
{
entries.extend_from_slice(&prepass::get_bind_group_layout_entries(
[16, 17, 18],
[17, 18, 19],
multisampled,
));
}
@@ -586,8 +597,9 @@ bitflags::bitflags! {
const MAY_DISCARD = (1 << 6); // Guards shader codepaths that may discard, allowing early depth tests in most cases
// See: https://www.khronos.org/opengl/wiki/Early_Fragment_Test
const ENVIRONMENT_MAP = (1 << 7);
const DEPTH_CLAMP_ORTHO = (1 << 8);
const TAA = (1 << 9);
const SCREEN_SPACE_AMBIENT_OCCLUSION = (1 << 8);
const DEPTH_CLAMP_ORTHO = (1 << 9);
const TAA = (1 << 10);
const BLEND_RESERVED_BITS = Self::BLEND_MASK_BITS << Self::BLEND_SHIFT_BITS; // ← Bitmask reserving bits for the blend state
const BLEND_OPAQUE = (0 << Self::BLEND_SHIFT_BITS); // ← Values are just sequential within the mask, and can range from 0 to 3
const BLEND_PREMULTIPLIED_ALPHA = (1 << Self::BLEND_SHIFT_BITS); //
@@ -727,6 +739,10 @@ impl SpecializedMeshPipeline for MeshPipeline {
bind_group_layout.push(self.mesh_layout.clone());
};
if key.contains(MeshPipelineKey::SCREEN_SPACE_AMBIENT_OCCLUSION) {
shader_defs.push("SCREEN_SPACE_AMBIENT_OCCLUSION".into());
}
let vertex_buffer_layout = layout.get_layout(&vertex_attributes)?;
let (label, blend, depth_write_enabled);
@@ -974,6 +990,7 @@ pub fn queue_mesh_view_bind_groups(
Entity,
&ViewShadowBindings,
&ViewClusterBindings,
Option<&ScreenSpaceAmbientOcclusionTextures>,
Option<&ViewPrepassTextures>,
Option<&EnvironmentMapLight>,
&Tonemapping,
@@ -1003,11 +1020,17 @@ pub fn queue_mesh_view_bind_groups(
entity,
view_shadow_bindings,
view_cluster_bindings,
ssao_textures,
prepass_textures,
environment_map,
tonemapping,
) in &views
{
let fallback_ssao = fallback_images
.image_for_samplecount(1)
.texture_view
.clone();
let layout = if msaa.samples() > 1 {
&mesh_pipeline.view_layout_multisampled
} else {
@@ -1063,18 +1086,26 @@ pub fn queue_mesh_view_bind_groups(
binding: 10,
resource: fog_binding.clone(),
},
BindGroupEntry {
binding: 11,
resource: BindingResource::TextureView(
ssao_textures
.map(|t| &t.screen_space_ambient_occlusion_texture.default_view)
.unwrap_or(&fallback_ssao),
),
},
];
let env_map = environment_map::get_bindings(
environment_map,
&images,
&fallback_cubemap,
[11, 12, 13],
[12, 13, 14],
);
entries.extend_from_slice(&env_map);
let tonemapping_luts =
get_lut_bindings(&images, &tonemapping_luts, tonemapping, [14, 15]);
get_lut_bindings(&images, &tonemapping_luts, tonemapping, [15, 16]);
entries.extend_from_slice(&tonemapping_luts);
// When using WebGL, we can't have a depth texture with multisampling
@@ -1086,7 +1117,7 @@ pub fn queue_mesh_view_bind_groups(
&mut fallback_images,
&mut fallback_depths,
&msaa,
[16, 17, 18],
[17, 18, 19],
));
}


@@ -47,29 +47,32 @@ var<uniform> globals: Globals;
var<uniform> fog: Fog;
@group(0) @binding(11)
var environment_map_diffuse: texture_cube<f32>;
var screen_space_ambient_occlusion_texture: texture_2d<f32>;
@group(0) @binding(12)
var environment_map_specular: texture_cube<f32>;
var environment_map_diffuse: texture_cube<f32>;
@group(0) @binding(13)
var environment_map_specular: texture_cube<f32>;
@group(0) @binding(14)
var environment_map_sampler: sampler;
@group(0) @binding(14)
var dt_lut_texture: texture_3d<f32>;
@group(0) @binding(15)
var dt_lut_texture: texture_3d<f32>;
@group(0) @binding(16)
var dt_lut_sampler: sampler;
#ifdef MULTISAMPLED
@group(0) @binding(16)
var depth_prepass_texture: texture_depth_multisampled_2d;
@group(0) @binding(17)
var normal_prepass_texture: texture_multisampled_2d<f32>;
var depth_prepass_texture: texture_depth_multisampled_2d;
@group(0) @binding(18)
var normal_prepass_texture: texture_multisampled_2d<f32>;
@group(0) @binding(19)
var motion_vector_prepass_texture: texture_multisampled_2d<f32>;
#else
@group(0) @binding(16)
var depth_prepass_texture: texture_depth_2d;
@group(0) @binding(17)
var normal_prepass_texture: texture_2d<f32>;
var depth_prepass_texture: texture_depth_2d;
@group(0) @binding(18)
var normal_prepass_texture: texture_2d<f32>;
@group(0) @binding(19)
var motion_vector_prepass_texture: texture_2d<f32>;
#endif


@@ -13,6 +13,10 @@
#import bevy_pbr::prepass_utils
#ifdef SCREEN_SPACE_AMBIENT_OCCLUSION
#import bevy_pbr::gtao_utils
#endif
struct FragmentInput {
@builtin(front_facing) is_front: bool,
@builtin(position) frag_coord: vec4<f32>,
@@ -88,12 +92,20 @@ fn fragment(in: FragmentInput) -> @location(0) vec4<f32> {
pbr_input.material.metallic = metallic;
pbr_input.material.perceptual_roughness = perceptual_roughness;
var occlusion: f32 = 1.0;
// TODO: Split into diffuse/specular occlusion?
var occlusion: vec3<f32> = vec3(1.0);
#ifdef VERTEX_UVS
if ((material.flags & STANDARD_MATERIAL_FLAGS_OCCLUSION_TEXTURE_BIT) != 0u) {
occlusion = textureSample(occlusion_texture, occlusion_sampler, uv).r;
occlusion = vec3(textureSample(occlusion_texture, occlusion_sampler, in.uv).r);
}
#endif
#ifdef SCREEN_SPACE_AMBIENT_OCCLUSION
let ssao = textureLoad(screen_space_ambient_occlusion_texture, vec2<i32>(in.frag_coord.xy), 0i).r;
let ssao_multibounce = gtao_multibounce(ssao, pbr_input.material.base_color.rgb);
occlusion = min(occlusion, ssao_multibounce);
#endif
pbr_input.occlusion = occlusion;
pbr_input.frag_coord = in.frag_coord;
pbr_input.world_position = in.world_position;


@@ -10,7 +10,7 @@ fn ambient_light(
diffuse_color: vec3<f32>,
specular_color: vec3<f32>,
perceptual_roughness: f32,
occlusion: f32,
occlusion: vec3<f32>,
) -> vec3<f32> {
let diffuse_ambient = EnvBRDFApprox(diffuse_color, F_AB(1.0, NdotV)) * occlusion;
let specular_ambient = EnvBRDFApprox(specular_color, F_AB(perceptual_roughness, NdotV));


@@ -126,7 +126,7 @@ fn calculate_view(
struct PbrInput {
material: StandardMaterial,
occlusion: f32,
occlusion: vec3<f32>,
frag_coord: vec4<f32>,
world_position: vec4<f32>,
// Normalized world normal used for shadow mapping as normal-mapping is not used for shadow
@@ -146,7 +146,7 @@ fn pbr_input_new() -> PbrInput {
var pbr_input: PbrInput;
pbr_input.material = standard_material_new();
pbr_input.occlusion = 1.0;
pbr_input.occlusion = vec3<f32>(1.0);
pbr_input.frag_coord = vec4<f32>(0.0, 0.0, 0.0, 1.0);
pbr_input.world_position = vec4<f32>(0.0, 0.0, 0.0, 1.0);


@@ -1,6 +1,7 @@
#define_import_path bevy_pbr::utils
const PI: f32 = 3.141592653589793;
const HALF_PI: f32 = 1.57079632679;
const E: f32 = 2.718281828459045;
fn hsv2rgb(hue: f32, saturation: f32, value: f32) -> vec3<f32> {


@@ -0,0 +1,176 @@
// Ground Truth-based Ambient Occlusion (GTAO)
// Paper: https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf
// Presentation: https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf
// Source code heavily based on XeGTAO v1.30 from Intel
// https://github.com/GameTechDev/XeGTAO/blob/0d177ce06bfa642f64d8af4de1197ad1bcb862d4/Source/Rendering/Shaders/XeGTAO.hlsli
#import bevy_pbr::gtao_utils
#import bevy_pbr::utils
#import bevy_render::view
#import bevy_render::globals
@group(0) @binding(0) var preprocessed_depth: texture_2d<f32>;
@group(0) @binding(1) var normals: texture_2d<f32>;
@group(0) @binding(2) var hilbert_index_lut: texture_2d<u32>;
@group(0) @binding(3) var ambient_occlusion: texture_storage_2d<r16float, write>;
@group(0) @binding(4) var depth_differences: texture_storage_2d<r32uint, write>;
@group(0) @binding(5) var<uniform> globals: Globals;
@group(1) @binding(0) var point_clamp_sampler: sampler;
@group(1) @binding(1) var<uniform> view: View;
fn load_noise(pixel_coordinates: vec2<i32>) -> vec2<f32> {
var index = textureLoad(hilbert_index_lut, pixel_coordinates % 64, 0).r;
#ifdef TEMPORAL_NOISE
index += 288u * (globals.frame_count % 64u);
#endif
// R2 sequence - http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences
return fract(0.5 + f32(index) * vec2<f32>(0.75487766624669276005, 0.5698402909980532659114));
}
// Calculate differences in depth between neighbor pixels (later used by the spatial denoiser pass to preserve object edges)
fn calculate_neighboring_depth_differences(pixel_coordinates: vec2<i32>) -> f32 {
// Sample the pixel's depth and 4 depths around it
let uv = vec2<f32>(pixel_coordinates) / view.viewport.zw;
let depths_upper_left = textureGather(0, preprocessed_depth, point_clamp_sampler, uv);
let depths_bottom_right = textureGather(0, preprocessed_depth, point_clamp_sampler, uv, vec2<i32>(1i, 1i));
let depth_center = depths_upper_left.y;
let depth_left = depths_upper_left.x;
let depth_top = depths_upper_left.z;
let depth_bottom = depths_bottom_right.x;
let depth_right = depths_bottom_right.z;
// Calculate the depth differences (large differences represent object edges)
var edge_info = vec4<f32>(depth_left, depth_right, depth_top, depth_bottom) - depth_center;
let slope_left_right = (edge_info.y - edge_info.x) * 0.5;
let slope_top_bottom = (edge_info.w - edge_info.z) * 0.5;
let edge_info_slope_adjusted = edge_info + vec4<f32>(slope_left_right, -slope_left_right, slope_top_bottom, -slope_top_bottom);
edge_info = min(abs(edge_info), abs(edge_info_slope_adjusted));
let bias = 0.25; // Using the bias and then saturating nudges the values a bit
let scale = depth_center * 0.011; // Weight the edges by their distance from the camera
edge_info = saturate((1.0 + bias) - edge_info / scale); // Apply the bias and scale, and invert edge_info so that small values become large, and vice versa
// Pack the edge info into the texture
let edge_info_packed = vec4<u32>(mypack4x8unorm(edge_info), 0u, 0u, 0u);
textureStore(depth_differences, pixel_coordinates, edge_info_packed);
return depth_center;
}
// TODO: Remove this once https://github.com/gfx-rs/naga/pull/2353 lands
fn mypack4x8unorm(e: vec4<f32>) -> u32 {
return u32(clamp(e.x, 0.0, 1.0) * 255.0 + 0.5) |
u32(clamp(e.y, 0.0, 1.0) * 255.0 + 0.5) << 8u |
u32(clamp(e.z, 0.0, 1.0) * 255.0 + 0.5) << 16u |
u32(clamp(e.w, 0.0, 1.0) * 255.0 + 0.5) << 24u;
}
fn load_normal_view_space(uv: vec2<f32>) -> vec3<f32> {
var world_normal = textureSampleLevel(normals, point_clamp_sampler, uv, 0.0).xyz;
world_normal = (world_normal * 2.0) - 1.0;
let inverse_view = mat3x3<f32>(
view.inverse_view[0].xyz,
view.inverse_view[1].xyz,
view.inverse_view[2].xyz,
);
return inverse_view * world_normal;
}
fn reconstruct_view_space_position(depth: f32, uv: vec2<f32>) -> vec3<f32> {
let clip_xy = vec2<f32>(uv.x * 2.0 - 1.0, 1.0 - 2.0 * uv.y);
let t = view.inverse_projection * vec4<f32>(clip_xy, depth, 1.0);
let view_xyz = t.xyz / t.w;
return view_xyz;
}
fn load_and_reconstruct_view_space_position(uv: vec2<f32>, sample_mip_level: f32) -> vec3<f32> {
let depth = textureSampleLevel(preprocessed_depth, point_clamp_sampler, uv, sample_mip_level).r;
return reconstruct_view_space_position(depth, uv);
}
@compute
@workgroup_size(8, 8, 1)
fn gtao(@builtin(global_invocation_id) global_id: vec3<u32>) {
let slice_count = f32(#SLICE_COUNT);
let samples_per_slice_side = f32(#SAMPLES_PER_SLICE_SIDE);
let effect_radius = 0.5 * 1.457;
let falloff_range = 0.615 * effect_radius;
let falloff_from = effect_radius * (1.0 - 0.615);
let falloff_mul = -1.0 / falloff_range;
let falloff_add = falloff_from / falloff_range + 1.0;
let pixel_coordinates = vec2<i32>(global_id.xy);
let uv = (vec2<f32>(pixel_coordinates) + 0.5) / view.viewport.zw;
var pixel_depth = calculate_neighboring_depth_differences(pixel_coordinates);
pixel_depth += 0.00001; // Avoid depth precision issues
let pixel_position = reconstruct_view_space_position(pixel_depth, uv);
let pixel_normal = load_normal_view_space(uv);
let view_vec = normalize(-pixel_position);
let noise = load_noise(pixel_coordinates);
let sample_scale = (-0.5 * effect_radius * view.projection[0][0]) / pixel_position.z;
var visibility = 0.0;
for (var slice_t = 0.0; slice_t < slice_count; slice_t += 1.0) {
let slice = slice_t + noise.x;
let phi = (PI / slice_count) * slice;
let omega = vec2<f32>(cos(phi), sin(phi));
let direction = vec3<f32>(omega.xy, 0.0);
let orthographic_direction = direction - (dot(direction, view_vec) * view_vec);
let axis = cross(direction, view_vec);
let projected_normal = pixel_normal - axis * dot(pixel_normal, axis);
let projected_normal_length = length(projected_normal);
let sign_norm = sign(dot(orthographic_direction, projected_normal));
let cos_norm = saturate(dot(projected_normal, view_vec) / projected_normal_length);
let n = sign_norm * fast_acos(cos_norm);
let min_cos_horizon_1 = cos(n + HALF_PI);
let min_cos_horizon_2 = cos(n - HALF_PI);
var cos_horizon_1 = min_cos_horizon_1;
var cos_horizon_2 = min_cos_horizon_2;
let sample_mul = vec2<f32>(omega.x, -omega.y) * sample_scale;
for (var sample_t = 0.0; sample_t < samples_per_slice_side; sample_t += 1.0) {
var sample_noise = (slice_t + sample_t * samples_per_slice_side) * 0.6180339887498948482;
sample_noise = fract(noise.y + sample_noise);
var s = (sample_t + sample_noise) / samples_per_slice_side;
s *= s; // https://github.com/GameTechDev/XeGTAO#sample-distribution
let sample = s * sample_mul;
let sample_mip_level = clamp(log2(length(sample)) - 3.3, 0.0, 5.0); // https://github.com/GameTechDev/XeGTAO#memory-bandwidth-bottleneck
let sample_position_1 = load_and_reconstruct_view_space_position(uv + sample, sample_mip_level);
let sample_position_2 = load_and_reconstruct_view_space_position(uv - sample, sample_mip_level);
let sample_difference_1 = sample_position_1 - pixel_position;
let sample_difference_2 = sample_position_2 - pixel_position;
let sample_distance_1 = length(sample_difference_1);
let sample_distance_2 = length(sample_difference_2);
var sample_cos_horizon_1 = dot(sample_difference_1 / sample_distance_1, view_vec);
var sample_cos_horizon_2 = dot(sample_difference_2 / sample_distance_2, view_vec);
let weight_1 = saturate(sample_distance_1 * falloff_mul + falloff_add);
let weight_2 = saturate(sample_distance_2 * falloff_mul + falloff_add);
sample_cos_horizon_1 = mix(min_cos_horizon_1, sample_cos_horizon_1, weight_1);
sample_cos_horizon_2 = mix(min_cos_horizon_2, sample_cos_horizon_2, weight_2);
cos_horizon_1 = max(cos_horizon_1, sample_cos_horizon_1);
cos_horizon_2 = max(cos_horizon_2, sample_cos_horizon_2);
}
let horizon_1 = fast_acos(cos_horizon_1);
let horizon_2 = -fast_acos(cos_horizon_2);
let v1 = (cos_norm + 2.0 * horizon_1 * sin(n) - cos(2.0 * horizon_1 - n)) / 4.0;
let v2 = (cos_norm + 2.0 * horizon_2 * sin(n) - cos(2.0 * horizon_2 - n)) / 4.0;
visibility += projected_normal_length * (v1 + v2);
}
visibility /= slice_count;
visibility = clamp(visibility, 0.03, 1.0);
textureStore(ambient_occlusion, pixel_coordinates, vec4<f32>(visibility, 0.0, 0.0, 0.0));
}
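The `mypack4x8unorm` helper in the shader above reimplements WGSL's built-in `pack4x8unorm` (pending the linked naga fix). The same packing, sketched as standalone Rust purely for illustration:

```rust
/// Pack four [0, 1] floats into one u32, 8 bits per channel,
/// with the x component in the low byte - matching pack4x8unorm.
fn pack4x8unorm(e: [f32; 4]) -> u32 {
    // u32(...) in WGSL truncates, so round by adding 0.5 before the cast.
    let byte = |v: f32| (v.clamp(0.0, 1.0) * 255.0 + 0.5) as u32;
    byte(e[0]) | (byte(e[1]) << 8) | (byte(e[2]) << 16) | (byte(e[3]) << 24)
}
```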


@@ -0,0 +1,22 @@
#define_import_path bevy_pbr::gtao_utils
// Approximates single-bounce ambient occlusion to multi-bounce ambient occlusion
// https://blog.selfshadow.com/publications/s2016-shading-course/activision/s2016_pbs_activision_occlusion.pdf#page=78
fn gtao_multibounce(visibility: f32, base_color: vec3<f32>) -> vec3<f32> {
let a = 2.0404 * base_color - 0.3324;
let b = -4.7951 * base_color + 0.6417;
let c = 2.7552 * base_color + 0.6903;
let x = vec3<f32>(visibility);
return max(x, ((x * a + b) * x + c) * x);
}
fn fast_sqrt(x: f32) -> f32 {
return bitcast<f32>(0x1fbd1df5 + (bitcast<i32>(x) >> 1u));
}
fn fast_acos(in_x: f32) -> f32 {
let x = abs(in_x);
var res = -0.156583 * x + HALF_PI;
res *= fast_sqrt(1.0 - x);
return select(PI - res, res, in_x >= 0.0);
}
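The `fast_acos` and `gtao_multibounce` fits above are plain arithmetic and can be checked on the CPU. A standalone Rust port (a hypothetical sketch, not part of this PR) stays within a few hundredths of a radian of `acos` on [-1, 1]:

```rust
use std::f32::consts::{FRAC_PI_2, PI};

// Crude sqrt approximation via integer bit manipulation,
// mirroring the bitcast trick in the WGSL above.
fn fast_sqrt(x: f32) -> f32 {
    f32::from_bits((0x1fbd1df5_i32 + ((x.to_bits() as i32) >> 1)) as u32)
}

// Cheap acos approximation used by the GTAO shader.
fn fast_acos(in_x: f32) -> f32 {
    let x = in_x.abs();
    let res = (-0.156583 * x + FRAC_PI_2) * fast_sqrt(1.0 - x);
    if in_x >= 0.0 { res } else { PI - res }
}

// Multi-bounce AO fit from the GTAO presentation, applied per color channel.
fn gtao_multibounce(visibility: f32, base_color: [f32; 3]) -> [f32; 3] {
    base_color.map(|c| {
        let a = 2.0404 * c - 0.3324;
        let b = -4.7951 * c + 0.6417;
        let c2 = 2.7552 * c + 0.6903;
        let x = visibility;
        x.max(((x * a + b) * x + c2) * x)
    })
}
```

Note how the polynomial is constructed so that full visibility stays (approximately) full, while partial visibility is brightened more for saturated base colors, approximating interreflected light.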


@@ -0,0 +1,989 @@
use bevy_app::{App, Plugin};
use bevy_asset::{load_internal_asset, HandleUntyped};
use bevy_core_pipeline::{
core_3d::CORE_3D,
prelude::Camera3d,
prepass::{DepthPrepass, NormalPrepass, ViewPrepassTextures},
};
use bevy_ecs::{
prelude::{Bundle, Component, Entity},
query::{QueryItem, With},
reflect::ReflectComponent,
schedule::IntoSystemConfigs,
system::{Commands, Query, Res, ResMut, Resource},
world::{FromWorld, World},
};
use bevy_reflect::{Reflect, TypeUuid};
use bevy_render::{
camera::{ExtractedCamera, TemporalJitter},
extract_component::ExtractComponent,
globals::{GlobalsBuffer, GlobalsUniform},
prelude::Camera,
render_graph::{NodeRunError, RenderGraphApp, RenderGraphContext, ViewNode, ViewNodeRunner},
render_resource::{
AddressMode, BindGroup, BindGroupDescriptor, BindGroupEntry, BindGroupLayout,
BindGroupLayoutDescriptor, BindGroupLayoutEntry, BindingResource, BindingType,
BufferBindingType, CachedComputePipelineId, ComputePassDescriptor,
ComputePipelineDescriptor, Extent3d, FilterMode, PipelineCache, Sampler,
SamplerBindingType, SamplerDescriptor, Shader, ShaderDefVal, ShaderStages, ShaderType,
SpecializedComputePipeline, SpecializedComputePipelines, StorageTextureAccess,
TextureDescriptor, TextureDimension, TextureFormat, TextureSampleType, TextureUsages,
TextureView, TextureViewDescriptor, TextureViewDimension,
},
renderer::{RenderAdapter, RenderContext, RenderDevice, RenderQueue},
texture::{CachedTexture, TextureCache},
view::{Msaa, ViewUniform, ViewUniformOffset, ViewUniforms},
Extract, ExtractSchedule, Render, RenderApp, RenderSet,
};
use bevy_utils::{
prelude::default,
tracing::{error, warn},
};
use std::mem;
pub mod draw_3d_graph {
pub mod node {
/// Label for the screen space ambient occlusion render node.
pub const SCREEN_SPACE_AMBIENT_OCCLUSION: &str = "screen_space_ambient_occlusion";
}
}
const PREPROCESS_DEPTH_SHADER_HANDLE: HandleUntyped =
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 102258915420479);
const GTAO_SHADER_HANDLE: HandleUntyped =
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 253938746510568);
const SPATIAL_DENOISE_SHADER_HANDLE: HandleUntyped =
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 466162052558226);
const GTAO_UTILS_SHADER_HANDLE: HandleUntyped =
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 366465052568786);
/// Plugin for screen space ambient occlusion.
pub struct ScreenSpaceAmbientOcclusionPlugin;
impl Plugin for ScreenSpaceAmbientOcclusionPlugin {
fn build(&self, app: &mut App) {
load_internal_asset!(
app,
PREPROCESS_DEPTH_SHADER_HANDLE,
"preprocess_depth.wgsl",
Shader::from_wgsl
);
load_internal_asset!(app, GTAO_SHADER_HANDLE, "gtao.wgsl", Shader::from_wgsl);
load_internal_asset!(
app,
SPATIAL_DENOISE_SHADER_HANDLE,
"spatial_denoise.wgsl",
Shader::from_wgsl
);
load_internal_asset!(
app,
GTAO_UTILS_SHADER_HANDLE,
"gtao_utils.wgsl",
Shader::from_wgsl
);
app.register_type::<ScreenSpaceAmbientOcclusionSettings>();
}
fn finish(&self, app: &mut App) {
let Ok(render_app) = app.get_sub_app_mut(RenderApp) else { return };
if !render_app
.world
.resource::<RenderAdapter>()
.get_texture_format_features(TextureFormat::R16Float)
.allowed_usages
.contains(TextureUsages::STORAGE_BINDING)
{
warn!("ScreenSpaceAmbientOcclusionPlugin not loaded. GPU lacks support: TextureFormat::R16Float does not support TextureUsages::STORAGE_BINDING.");
return;
}
if render_app
.world
.resource::<RenderDevice>()
.limits()
.max_storage_textures_per_shader_stage
< 5
{
warn!("ScreenSpaceAmbientOcclusionPlugin not loaded. GPU lacks support: Limits::max_storage_textures_per_shader_stage is less than 5.");
return;
}
render_app
.init_resource::<SsaoPipelines>()
.init_resource::<SpecializedComputePipelines<SsaoPipelines>>()
.add_systems(ExtractSchedule, extract_ssao_settings)
.add_systems(Render, prepare_ssao_textures.in_set(RenderSet::Prepare))
.add_systems(Render, prepare_ssao_pipelines.in_set(RenderSet::Prepare))
.add_systems(Render, queue_ssao_bind_groups.in_set(RenderSet::Queue))
.add_render_graph_node::<ViewNodeRunner<SsaoNode>>(
CORE_3D,
draw_3d_graph::node::SCREEN_SPACE_AMBIENT_OCCLUSION,
)
.add_render_graph_edges(
CORE_3D,
&[
// PREPASS -> SCREEN_SPACE_AMBIENT_OCCLUSION -> MAIN_PASS
bevy_core_pipeline::core_3d::graph::node::PREPASS,
draw_3d_graph::node::SCREEN_SPACE_AMBIENT_OCCLUSION,
bevy_core_pipeline::core_3d::graph::node::START_MAIN_PASS,
],
);
}
}
/// Bundle to apply screen space ambient occlusion.
#[derive(Bundle, Default)]
pub struct ScreenSpaceAmbientOcclusionBundle {
pub settings: ScreenSpaceAmbientOcclusionSettings,
pub depth_prepass: DepthPrepass,
pub normal_prepass: NormalPrepass,
}
/// Component to apply screen space ambient occlusion to a 3d camera.
///
/// Screen space ambient occlusion (SSAO) approximates small-scale,
/// local occlusion of _indirect_ diffuse light between objects, based on what's visible on-screen.
/// SSAO does not apply to direct lighting, such as point or directional lights.
///
/// This darkens creases, e.g. on staircases, and gives nice contact shadows
/// where objects meet, giving entities a more "grounded" feel.
///
/// # Usage Notes
///
/// Requires that you add [`ScreenSpaceAmbientOcclusionPlugin`] to your app,
/// and add the [`DepthPrepass`] and [`NormalPrepass`] components to your camera.
///
/// It is strongly recommended that you use SSAO in conjunction with
/// TAA ([`bevy_core_pipeline::experimental::taa::TemporalAntiAliasSettings`]).
/// Doing so greatly reduces SSAO noise.
///
/// SSAO is not supported on `WebGL2`, and is not currently supported on `WebGPU` or `DirectX12`.
#[derive(Component, ExtractComponent, Reflect, PartialEq, Eq, Hash, Clone, Default)]
#[reflect(Component)]
pub struct ScreenSpaceAmbientOcclusionSettings {
pub quality_level: ScreenSpaceAmbientOcclusionQualityLevel,
}
#[derive(Reflect, PartialEq, Eq, Hash, Clone, Copy, Default)]
pub enum ScreenSpaceAmbientOcclusionQualityLevel {
Low,
Medium,
#[default]
High,
Ultra,
Custom {
/// Higher slice count means less noise, but worse performance.
slice_count: u32,
/// Samples per slice side is also tweakable, but recommended to be left at 2 or 3.
samples_per_slice_side: u32,
},
}
impl ScreenSpaceAmbientOcclusionQualityLevel {
fn sample_counts(&self) -> (u32, u32) {
match self {
Self::Low => (1, 2), // 4 spp (1 * (2 * 2)), plus optional temporal samples
Self::Medium => (2, 2), // 8 spp (2 * (2 * 2)), plus optional temporal samples
Self::High => (3, 3), // 18 spp (3 * (3 * 2)), plus optional temporal samples
Self::Ultra => (9, 3), // 54 spp (9 * (3 * 2)), plus optional temporal samples
Self::Custom {
slice_count: slices,
samples_per_slice_side,
} => (*slices, *samples_per_slice_side),
}
}
}
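The `spp` figures in the comments above follow from each slice being sampled on both sides of the pixel; a hypothetical helper (not in the PR) making that arithmetic explicit:

```rust
/// Effective samples per pixel for a GTAO preset:
/// each slice takes `samples_per_slice_side` samples on *both* sides.
fn samples_per_pixel(slice_count: u32, samples_per_slice_side: u32) -> u32 {
    slice_count * samples_per_slice_side * 2
}
```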
#[derive(Default)]
struct SsaoNode {}
impl ViewNode for SsaoNode {
type ViewQuery = (
&'static ExtractedCamera,
&'static SsaoPipelineId,
&'static SsaoBindGroups,
&'static ViewUniformOffset,
);
fn run(
&self,
_graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
(camera, pipeline_id, bind_groups, view_uniform_offset): QueryItem<Self::ViewQuery>,
world: &World,
) -> Result<(), NodeRunError> {
let pipelines = world.resource::<SsaoPipelines>();
let pipeline_cache = world.resource::<PipelineCache>();
let (
Some(camera_size),
Some(preprocess_depth_pipeline),
Some(spatial_denoise_pipeline),
Some(gtao_pipeline),
) = (
camera.physical_viewport_size,
pipeline_cache.get_compute_pipeline(pipelines.preprocess_depth_pipeline),
pipeline_cache.get_compute_pipeline(pipelines.spatial_denoise_pipeline),
pipeline_cache.get_compute_pipeline(pipeline_id.0),
) else {
return Ok(());
};
render_context.command_encoder().push_debug_group("ssao");
{
let mut preprocess_depth_pass =
render_context
.command_encoder()
.begin_compute_pass(&ComputePassDescriptor {
label: Some("ssao_preprocess_depth_pass"),
});
preprocess_depth_pass.set_pipeline(preprocess_depth_pipeline);
preprocess_depth_pass.set_bind_group(0, &bind_groups.preprocess_depth_bind_group, &[]);
preprocess_depth_pass.set_bind_group(
1,
&bind_groups.common_bind_group,
&[view_uniform_offset.offset],
);
preprocess_depth_pass.dispatch_workgroups(
div_ceil(camera_size.x, 16),
div_ceil(camera_size.y, 16),
1,
);
}
{
let mut gtao_pass =
render_context
.command_encoder()
.begin_compute_pass(&ComputePassDescriptor {
label: Some("ssao_gtao_pass"),
});
gtao_pass.set_pipeline(gtao_pipeline);
gtao_pass.set_bind_group(0, &bind_groups.gtao_bind_group, &[]);
gtao_pass.set_bind_group(
1,
&bind_groups.common_bind_group,
&[view_uniform_offset.offset],
);
gtao_pass.dispatch_workgroups(
div_ceil(camera_size.x, 8),
div_ceil(camera_size.y, 8),
1,
);
}
{
let mut spatial_denoise_pass =
render_context
.command_encoder()
.begin_compute_pass(&ComputePassDescriptor {
label: Some("ssao_spatial_denoise_pass"),
});
spatial_denoise_pass.set_pipeline(spatial_denoise_pipeline);
spatial_denoise_pass.set_bind_group(0, &bind_groups.spatial_denoise_bind_group, &[]);
spatial_denoise_pass.set_bind_group(
1,
&bind_groups.common_bind_group,
&[view_uniform_offset.offset],
);
spatial_denoise_pass.dispatch_workgroups(
div_ceil(camera_size.x, 8),
div_ceil(camera_size.y, 8),
1,
);
}
render_context.command_encoder().pop_debug_group();
Ok(())
}
}
#[derive(Resource)]
struct SsaoPipelines {
preprocess_depth_pipeline: CachedComputePipelineId,
spatial_denoise_pipeline: CachedComputePipelineId,
common_bind_group_layout: BindGroupLayout,
preprocess_depth_bind_group_layout: BindGroupLayout,
gtao_bind_group_layout: BindGroupLayout,
spatial_denoise_bind_group_layout: BindGroupLayout,
hilbert_index_lut: TextureView,
point_clamp_sampler: Sampler,
}
impl FromWorld for SsaoPipelines {
fn from_world(world: &mut World) -> Self {
let render_device = world.resource::<RenderDevice>();
let render_queue = world.resource::<RenderQueue>();
let pipeline_cache = world.resource::<PipelineCache>();
let hilbert_index_lut = render_device
.create_texture_with_data(
render_queue,
&(TextureDescriptor {
label: Some("ssao_hilbert_index_lut"),
size: Extent3d {
width: HILBERT_WIDTH as u32,
height: HILBERT_WIDTH as u32,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: TextureDimension::D2,
format: TextureFormat::R16Uint,
usage: TextureUsages::TEXTURE_BINDING,
view_formats: &[],
}),
bytemuck::cast_slice(&generate_hilbert_index_lut()),
)
.create_view(&TextureViewDescriptor::default());
let point_clamp_sampler = render_device.create_sampler(&SamplerDescriptor {
min_filter: FilterMode::Nearest,
mag_filter: FilterMode::Nearest,
mipmap_filter: FilterMode::Nearest,
address_mode_u: AddressMode::ClampToEdge,
address_mode_v: AddressMode::ClampToEdge,
..Default::default()
});
let common_bind_group_layout =
render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
label: Some("ssao_common_bind_group_layout"),
entries: &[
BindGroupLayoutEntry {
binding: 0,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Sampler(SamplerBindingType::NonFiltering),
count: None,
},
BindGroupLayoutEntry {
binding: 1,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Buffer {
ty: BufferBindingType::Uniform,
has_dynamic_offset: true,
min_binding_size: Some(ViewUniform::min_size()),
},
count: None,
},
],
});
let mip_texture_entry = BindGroupLayoutEntry {
binding: 1,
visibility: ShaderStages::COMPUTE,
ty: BindingType::StorageTexture {
access: StorageTextureAccess::WriteOnly,
format: TextureFormat::R16Float,
view_dimension: TextureViewDimension::D2,
},
count: None,
};
let preprocess_depth_bind_group_layout =
render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
label: Some("ssao_preprocess_depth_bind_group_layout"),
entries: &[
BindGroupLayoutEntry {
binding: 0,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Depth,
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
mip_texture_entry,
BindGroupLayoutEntry {
binding: 2,
..mip_texture_entry
},
BindGroupLayoutEntry {
binding: 3,
..mip_texture_entry
},
BindGroupLayoutEntry {
binding: 4,
..mip_texture_entry
},
BindGroupLayoutEntry {
binding: 5,
..mip_texture_entry
},
],
});
let gtao_bind_group_layout =
render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
label: Some("ssao_gtao_bind_group_layout"),
entries: &[
BindGroupLayoutEntry {
binding: 0,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Float { filterable: false },
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
BindGroupLayoutEntry {
binding: 1,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Float { filterable: false },
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
BindGroupLayoutEntry {
binding: 2,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Uint,
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
BindGroupLayoutEntry {
binding: 3,
visibility: ShaderStages::COMPUTE,
ty: BindingType::StorageTexture {
access: StorageTextureAccess::WriteOnly,
format: TextureFormat::R16Float,
view_dimension: TextureViewDimension::D2,
},
count: None,
},
BindGroupLayoutEntry {
binding: 4,
visibility: ShaderStages::COMPUTE,
ty: BindingType::StorageTexture {
access: StorageTextureAccess::WriteOnly,
format: TextureFormat::R32Uint,
view_dimension: TextureViewDimension::D2,
},
count: None,
},
BindGroupLayoutEntry {
binding: 5,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Buffer {
ty: BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: Some(GlobalsUniform::min_size()),
},
count: None,
},
],
});
let spatial_denoise_bind_group_layout =
render_device.create_bind_group_layout(&BindGroupLayoutDescriptor {
label: Some("ssao_spatial_denoise_bind_group_layout"),
entries: &[
BindGroupLayoutEntry {
binding: 0,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Float { filterable: false },
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
BindGroupLayoutEntry {
binding: 1,
visibility: ShaderStages::COMPUTE,
ty: BindingType::Texture {
sample_type: TextureSampleType::Uint,
view_dimension: TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
BindGroupLayoutEntry {
binding: 2,
visibility: ShaderStages::COMPUTE,
ty: BindingType::StorageTexture {
access: StorageTextureAccess::WriteOnly,
format: TextureFormat::R16Float,
view_dimension: TextureViewDimension::D2,
},
count: None,
},
],
});
let preprocess_depth_pipeline =
pipeline_cache.queue_compute_pipeline(ComputePipelineDescriptor {
label: Some("ssao_preprocess_depth_pipeline".into()),
layout: vec![
preprocess_depth_bind_group_layout.clone(),
common_bind_group_layout.clone(),
],
push_constant_ranges: vec![],
shader: PREPROCESS_DEPTH_SHADER_HANDLE.typed(),
shader_defs: Vec::new(),
entry_point: "preprocess_depth".into(),
});
let spatial_denoise_pipeline =
pipeline_cache.queue_compute_pipeline(ComputePipelineDescriptor {
label: Some("ssao_spatial_denoise_pipeline".into()),
layout: vec![
spatial_denoise_bind_group_layout.clone(),
common_bind_group_layout.clone(),
],
push_constant_ranges: vec![],
shader: SPATIAL_DENOISE_SHADER_HANDLE.typed(),
shader_defs: Vec::new(),
entry_point: "spatial_denoise".into(),
});
Self {
preprocess_depth_pipeline,
spatial_denoise_pipeline,
common_bind_group_layout,
preprocess_depth_bind_group_layout,
gtao_bind_group_layout,
spatial_denoise_bind_group_layout,
hilbert_index_lut,
point_clamp_sampler,
}
}
}
#[derive(PartialEq, Eq, Hash, Clone)]
struct SsaoPipelineKey {
ssao_settings: ScreenSpaceAmbientOcclusionSettings,
temporal_noise: bool,
}
impl SpecializedComputePipeline for SsaoPipelines {
type Key = SsaoPipelineKey;
fn specialize(&self, key: Self::Key) -> ComputePipelineDescriptor {
let (slice_count, samples_per_slice_side) = key.ssao_settings.quality_level.sample_counts();
let mut shader_defs = vec![
ShaderDefVal::Int("SLICE_COUNT".to_string(), slice_count as i32),
ShaderDefVal::Int(
"SAMPLES_PER_SLICE_SIDE".to_string(),
samples_per_slice_side as i32,
),
];
if key.temporal_noise {
shader_defs.push("TEMPORAL_NOISE".into());
}
ComputePipelineDescriptor {
label: Some("ssao_gtao_pipeline".into()),
layout: vec![
self.gtao_bind_group_layout.clone(),
self.common_bind_group_layout.clone(),
],
push_constant_ranges: vec![],
shader: GTAO_SHADER_HANDLE.typed(),
shader_defs,
entry_point: "gtao".into(),
}
}
}
fn extract_ssao_settings(
mut commands: Commands,
cameras: Extract<
Query<
(Entity, &Camera, &ScreenSpaceAmbientOcclusionSettings),
(With<Camera3d>, With<DepthPrepass>, With<NormalPrepass>),
>,
>,
msaa: Extract<Res<Msaa>>,
) {
for (entity, camera, ssao_settings) in &cameras {
if **msaa != Msaa::Off {
error!(
"SSAO is being used which requires Msaa::Off, but Msaa is currently set to Msaa::{:?}",
**msaa
);
return;
}
if camera.is_active {
commands.get_or_spawn(entity).insert(ssao_settings.clone());
}
}
}
#[derive(Component)]
pub struct ScreenSpaceAmbientOcclusionTextures {
preprocessed_depth_texture: CachedTexture,
ssao_noisy_texture: CachedTexture, // Pre-spatially denoised texture
pub screen_space_ambient_occlusion_texture: CachedTexture, // Spatially denoised texture
depth_differences_texture: CachedTexture,
}
fn prepare_ssao_textures(
mut commands: Commands,
mut texture_cache: ResMut<TextureCache>,
render_device: Res<RenderDevice>,
views: Query<(Entity, &ExtractedCamera), With<ScreenSpaceAmbientOcclusionSettings>>,
) {
for (entity, camera) in &views {
let Some(physical_viewport_size) = camera.physical_viewport_size else { continue };
let size = Extent3d {
width: physical_viewport_size.x,
height: physical_viewport_size.y,
depth_or_array_layers: 1,
};
let preprocessed_depth_texture = texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("ssao_preprocessed_depth_texture"),
size,
mip_level_count: 5,
sample_count: 1,
dimension: TextureDimension::D2,
format: TextureFormat::R16Float,
usage: TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING,
view_formats: &[],
},
);
let ssao_noisy_texture = texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("ssao_noisy_texture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: TextureDimension::D2,
format: TextureFormat::R16Float,
usage: TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING,
view_formats: &[],
},
);
let ssao_texture = texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("ssao_texture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: TextureDimension::D2,
format: TextureFormat::R16Float,
usage: TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING,
view_formats: &[],
},
);
let depth_differences_texture = texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("ssao_depth_differences_texture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: TextureDimension::D2,
format: TextureFormat::R32Uint,
usage: TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING,
view_formats: &[],
},
);
commands
.entity(entity)
.insert(ScreenSpaceAmbientOcclusionTextures {
preprocessed_depth_texture,
ssao_noisy_texture,
screen_space_ambient_occlusion_texture: ssao_texture,
depth_differences_texture,
});
}
}
#[derive(Component)]
struct SsaoPipelineId(CachedComputePipelineId);
fn prepare_ssao_pipelines(
mut commands: Commands,
pipeline_cache: Res<PipelineCache>,
mut pipelines: ResMut<SpecializedComputePipelines<SsaoPipelines>>,
pipeline: Res<SsaoPipelines>,
views: Query<(
Entity,
&ScreenSpaceAmbientOcclusionSettings,
Option<&TemporalJitter>,
)>,
) {
for (entity, ssao_settings, temporal_jitter) in &views {
let pipeline_id = pipelines.specialize(
&pipeline_cache,
&pipeline,
SsaoPipelineKey {
ssao_settings: ssao_settings.clone(),
temporal_noise: temporal_jitter.is_some(),
},
);
commands.entity(entity).insert(SsaoPipelineId(pipeline_id));
}
}
#[derive(Component)]
struct SsaoBindGroups {
common_bind_group: BindGroup,
preprocess_depth_bind_group: BindGroup,
gtao_bind_group: BindGroup,
spatial_denoise_bind_group: BindGroup,
}
fn queue_ssao_bind_groups(
mut commands: Commands,
render_device: Res<RenderDevice>,
pipelines: Res<SsaoPipelines>,
view_uniforms: Res<ViewUniforms>,
global_uniforms: Res<GlobalsBuffer>,
views: Query<(
Entity,
&ScreenSpaceAmbientOcclusionTextures,
&ViewPrepassTextures,
)>,
) {
let (Some(view_uniforms), Some(globals_uniforms)) = (
view_uniforms.uniforms.binding(),
global_uniforms.buffer.binding(),
) else {
return;
};
for (entity, ssao_textures, prepass_textures) in &views {
let common_bind_group = render_device.create_bind_group(&BindGroupDescriptor {
label: Some("ssao_common_bind_group"),
layout: &pipelines.common_bind_group_layout,
entries: &[
BindGroupEntry {
binding: 0,
resource: BindingResource::Sampler(&pipelines.point_clamp_sampler),
},
BindGroupEntry {
binding: 1,
resource: view_uniforms.clone(),
},
],
});
let preprocess_depth_mip_view_descriptor = TextureViewDescriptor {
format: Some(TextureFormat::R16Float),
dimension: Some(TextureViewDimension::D2),
mip_level_count: Some(1),
..default()
};
let preprocess_depth_bind_group = render_device.create_bind_group(&BindGroupDescriptor {
label: Some("ssao_preprocess_depth_bind_group"),
layout: &pipelines.preprocess_depth_bind_group_layout,
entries: &[
BindGroupEntry {
binding: 0,
resource: BindingResource::TextureView(
&prepass_textures.depth.as_ref().unwrap().default_view,
),
},
BindGroupEntry {
binding: 1,
resource: BindingResource::TextureView(
&ssao_textures
.preprocessed_depth_texture
.texture
.create_view(&TextureViewDescriptor {
label: Some("ssao_preprocessed_depth_texture_mip_view_0"),
base_mip_level: 0,
..preprocess_depth_mip_view_descriptor
}),
),
},
BindGroupEntry {
binding: 2,
resource: BindingResource::TextureView(
&ssao_textures
.preprocessed_depth_texture
.texture
.create_view(&TextureViewDescriptor {
label: Some("ssao_preprocessed_depth_texture_mip_view_1"),
base_mip_level: 1,
..preprocess_depth_mip_view_descriptor
}),
),
},
BindGroupEntry {
binding: 3,
resource: BindingResource::TextureView(
&ssao_textures
.preprocessed_depth_texture
.texture
.create_view(&TextureViewDescriptor {
label: Some("ssao_preprocessed_depth_texture_mip_view_2"),
base_mip_level: 2,
..preprocess_depth_mip_view_descriptor
}),
),
},
BindGroupEntry {
binding: 4,
resource: BindingResource::TextureView(
&ssao_textures
.preprocessed_depth_texture
.texture
.create_view(&TextureViewDescriptor {
label: Some("ssao_preprocessed_depth_texture_mip_view_3"),
base_mip_level: 3,
..preprocess_depth_mip_view_descriptor
}),
),
},
BindGroupEntry {
binding: 5,
resource: BindingResource::TextureView(
&ssao_textures
.preprocessed_depth_texture
.texture
.create_view(&TextureViewDescriptor {
label: Some("ssao_preprocessed_depth_texture_mip_view_4"),
base_mip_level: 4,
..preprocess_depth_mip_view_descriptor
}),
),
},
],
});
let gtao_bind_group = render_device.create_bind_group(&BindGroupDescriptor {
label: Some("ssao_gtao_bind_group"),
layout: &pipelines.gtao_bind_group_layout,
entries: &[
BindGroupEntry {
binding: 0,
resource: BindingResource::TextureView(
&ssao_textures.preprocessed_depth_texture.default_view,
),
},
BindGroupEntry {
binding: 1,
resource: BindingResource::TextureView(
&prepass_textures.normal.as_ref().unwrap().default_view,
),
},
BindGroupEntry {
binding: 2,
resource: BindingResource::TextureView(&pipelines.hilbert_index_lut),
},
BindGroupEntry {
binding: 3,
resource: BindingResource::TextureView(
&ssao_textures.ssao_noisy_texture.default_view,
),
},
BindGroupEntry {
binding: 4,
resource: BindingResource::TextureView(
&ssao_textures.depth_differences_texture.default_view,
),
},
BindGroupEntry {
binding: 5,
resource: globals_uniforms.clone(),
},
],
});
let spatial_denoise_bind_group = render_device.create_bind_group(&BindGroupDescriptor {
label: Some("ssao_spatial_denoise_bind_group"),
layout: &pipelines.spatial_denoise_bind_group_layout,
entries: &[
BindGroupEntry {
binding: 0,
resource: BindingResource::TextureView(
&ssao_textures.ssao_noisy_texture.default_view,
),
},
BindGroupEntry {
binding: 1,
resource: BindingResource::TextureView(
&ssao_textures.depth_differences_texture.default_view,
),
},
BindGroupEntry {
binding: 2,
resource: BindingResource::TextureView(
&ssao_textures
.screen_space_ambient_occlusion_texture
.default_view,
),
},
],
});
commands.entity(entity).insert(SsaoBindGroups {
common_bind_group,
preprocess_depth_bind_group,
gtao_bind_group,
spatial_denoise_bind_group,
});
}
}
#[allow(clippy::needless_range_loop)]
fn generate_hilbert_index_lut() -> [[u16; 64]; 64] {
let mut t = [[0; 64]; 64];
for x in 0..64 {
for y in 0..64 {
t[x][y] = hilbert_index(x as u16, y as u16);
}
}
t
}
// https://www.shadertoy.com/view/3tB3z3
const HILBERT_WIDTH: u16 = 64;
fn hilbert_index(mut x: u16, mut y: u16) -> u16 {
let mut index = 0;
let mut level: u16 = HILBERT_WIDTH / 2;
while level > 0 {
let region_x = ((x & level) > 0) as u16;
let region_y = ((y & level) > 0) as u16;
index += level * level * ((3 * region_x) ^ region_y);
if region_y == 0 {
if region_x == 1 {
x = HILBERT_WIDTH - 1 - x;
y = HILBERT_WIDTH - 1 - y;
}
mem::swap(&mut x, &mut y);
}
level /= 2;
}
index
}
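The noise LUT built from `hilbert_index` relies on the curve visiting each of the 64x64 texels exactly once. A standalone copy of the function (duplicated here purely for illustration) can check that bijectivity property:

```rust
use std::collections::HashSet;
use std::mem;

const HILBERT_WIDTH: u16 = 64;

// Same xy2d conversion as in the PR: maps a 64x64 coordinate to its
// index along the Hilbert curve.
fn hilbert_index(mut x: u16, mut y: u16) -> u16 {
    let mut index = 0;
    let mut level: u16 = HILBERT_WIDTH / 2;
    while level > 0 {
        let region_x = ((x & level) > 0) as u16;
        let region_y = ((y & level) > 0) as u16;
        index += level * level * ((3 * region_x) ^ region_y);
        if region_y == 0 {
            if region_x == 1 {
                x = HILBERT_WIDTH - 1 - x;
                y = HILBERT_WIDTH - 1 - y;
            }
            mem::swap(&mut x, &mut y);
        }
        level /= 2;
    }
    index
}

fn main() {
    // The curve starts at the origin...
    assert_eq!(hilbert_index(0, 0), 0);
    // ...and visits every texel exactly once: 64*64 distinct indices in 0..4096.
    let indices: HashSet<u16> = (0..64)
        .flat_map(|x| (0..64).map(move |y| hilbert_index(x, y)))
        .collect();
    assert_eq!(indices.len(), 64 * 64);
    assert!(indices.iter().all(|&i| i < 64 * 64));
}
```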
/// Divide `numerator` by `denominator`, rounding the result up to the nearest integer (ceiling division).
fn div_ceil(numerator: u32, denominator: u32) -> u32 {
(numerator + denominator - 1) / denominator
}
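A quick sanity check of `div_ceil` as used for the compute dispatches above: with 8x8 workgroups, partial tiles at the viewport edge must round up so every pixel is covered.

```rust
// Ceiling division, as used to size the SSAO compute dispatches.
fn div_ceil(numerator: u32, denominator: u32) -> u32 {
    (numerator + denominator - 1) / denominator
}

fn main() {
    // A 1920x1080 view with 8x8 workgroups needs a 240x135 dispatch.
    assert_eq!(div_ceil(1920, 8), 240);
    assert_eq!(div_ceil(1080, 8), 135);
    // A partial tile rounds up so edge pixels are still covered.
    assert_eq!(div_ceil(1081, 8), 136);
}
```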


@@ -0,0 +1,101 @@
// Inputs a depth texture and outputs a MIP-chain of depths.
//
// Because SSAO's performance is bound by texture reads, this increases
// performance over using the full resolution depth for every sample.
// Reference: https://research.nvidia.com/sites/default/files/pubs/2012-06_Scalable-Ambient-Obscurance/McGuire12SAO.pdf, section 2.2
#import bevy_render::view
@group(0) @binding(0) var input_depth: texture_depth_2d;
@group(0) @binding(1) var preprocessed_depth_mip0: texture_storage_2d<r16float, write>;
@group(0) @binding(2) var preprocessed_depth_mip1: texture_storage_2d<r16float, write>;
@group(0) @binding(3) var preprocessed_depth_mip2: texture_storage_2d<r16float, write>;
@group(0) @binding(4) var preprocessed_depth_mip3: texture_storage_2d<r16float, write>;
@group(0) @binding(5) var preprocessed_depth_mip4: texture_storage_2d<r16float, write>;
@group(1) @binding(0) var point_clamp_sampler: sampler;
@group(1) @binding(1) var<uniform> view: View;
// Using 4 depths from the previous MIP, compute a weighted average for the depth of the current MIP
fn weighted_average(depth0: f32, depth1: f32, depth2: f32, depth3: f32) -> f32 {
let depth_range_scale_factor = 0.75;
let effect_radius = depth_range_scale_factor * 0.5 * 1.457;
let falloff_range = 0.615 * effect_radius;
let falloff_from = effect_radius * (1.0 - 0.615);
let falloff_mul = -1.0 / falloff_range;
let falloff_add = falloff_from / falloff_range + 1.0;
let min_depth = min(min(depth0, depth1), min(depth2, depth3));
let weight0 = saturate((depth0 - min_depth) * falloff_mul + falloff_add);
let weight1 = saturate((depth1 - min_depth) * falloff_mul + falloff_add);
let weight2 = saturate((depth2 - min_depth) * falloff_mul + falloff_add);
let weight3 = saturate((depth3 - min_depth) * falloff_mul + falloff_add);
let weight_total = weight0 + weight1 + weight2 + weight3;
return ((weight0 * depth0) + (weight1 * depth1) + (weight2 * depth2) + (weight3 * depth3)) / weight_total;
}
// Used to share the depths from the previous MIP level between all invocations in a workgroup
var<workgroup> previous_mip_depth: array<array<f32, 8>, 8>;
@compute
@workgroup_size(8, 8, 1)
fn preprocess_depth(@builtin(global_invocation_id) global_id: vec3<u32>, @builtin(local_invocation_id) local_id: vec3<u32>) {
let base_coordinates = vec2<i32>(global_id.xy);
// MIP 0 - Copy 4 texels from the input depth (per invocation, 8x8 invocations per workgroup)
let pixel_coordinates0 = base_coordinates * 2i;
let pixel_coordinates1 = pixel_coordinates0 + vec2<i32>(1i, 0i);
let pixel_coordinates2 = pixel_coordinates0 + vec2<i32>(0i, 1i);
let pixel_coordinates3 = pixel_coordinates0 + vec2<i32>(1i, 1i);
let depths_uv = vec2<f32>(pixel_coordinates0) / view.viewport.zw;
let depths = textureGather(input_depth, point_clamp_sampler, depths_uv, vec2<i32>(1i, 1i));
textureStore(preprocessed_depth_mip0, pixel_coordinates0, vec4<f32>(depths.w, 0.0, 0.0, 0.0));
textureStore(preprocessed_depth_mip0, pixel_coordinates1, vec4<f32>(depths.z, 0.0, 0.0, 0.0));
textureStore(preprocessed_depth_mip0, pixel_coordinates2, vec4<f32>(depths.x, 0.0, 0.0, 0.0));
textureStore(preprocessed_depth_mip0, pixel_coordinates3, vec4<f32>(depths.y, 0.0, 0.0, 0.0));
// MIP 1 - Weighted average of MIP 0's depth values (per invocation, 8x8 invocations per workgroup)
let depth_mip1 = weighted_average(depths.w, depths.z, depths.x, depths.y);
textureStore(preprocessed_depth_mip1, base_coordinates, vec4<f32>(depth_mip1, 0.0, 0.0, 0.0));
previous_mip_depth[local_id.x][local_id.y] = depth_mip1;
workgroupBarrier();
// MIP 2 - Weighted average of MIP 1's depth values (per invocation, 4x4 invocations per workgroup)
if all(local_id.xy % vec2<u32>(2u) == vec2<u32>(0u)) {
let depth0 = previous_mip_depth[local_id.x + 0u][local_id.y + 0u];
let depth1 = previous_mip_depth[local_id.x + 1u][local_id.y + 0u];
let depth2 = previous_mip_depth[local_id.x + 0u][local_id.y + 1u];
let depth3 = previous_mip_depth[local_id.x + 1u][local_id.y + 1u];
let depth_mip2 = weighted_average(depth0, depth1, depth2, depth3);
textureStore(preprocessed_depth_mip2, base_coordinates / 2i, vec4<f32>(depth_mip2, 0.0, 0.0, 0.0));
previous_mip_depth[local_id.x][local_id.y] = depth_mip2;
}
workgroupBarrier();
// MIP 3 - Weighted average of MIP 2's depth values (per invocation, 2x2 invocations per workgroup)
if all(local_id.xy % vec2<u32>(4u) == vec2<u32>(0u)) {
let depth0 = previous_mip_depth[local_id.x + 0u][local_id.y + 0u];
let depth1 = previous_mip_depth[local_id.x + 2u][local_id.y + 0u];
let depth2 = previous_mip_depth[local_id.x + 0u][local_id.y + 2u];
let depth3 = previous_mip_depth[local_id.x + 2u][local_id.y + 2u];
let depth_mip3 = weighted_average(depth0, depth1, depth2, depth3);
textureStore(preprocessed_depth_mip3, base_coordinates / 4i, vec4<f32>(depth_mip3, 0.0, 0.0, 0.0));
previous_mip_depth[local_id.x][local_id.y] = depth_mip3;
}
workgroupBarrier();
// MIP 4 - Weighted average of MIP 3's depth values (per invocation, 1 invocation per workgroup)
if all(local_id.xy % vec2<u32>(8u) == vec2<u32>(0u)) {
let depth0 = previous_mip_depth[local_id.x + 0u][local_id.y + 0u];
let depth1 = previous_mip_depth[local_id.x + 4u][local_id.y + 0u];
let depth2 = previous_mip_depth[local_id.x + 0u][local_id.y + 4u];
let depth3 = previous_mip_depth[local_id.x + 4u][local_id.y + 4u];
let depth_mip4 = weighted_average(depth0, depth1, depth2, depth3);
textureStore(preprocessed_depth_mip4, base_coordinates / 8i, vec4<f32>(depth_mip4, 0.0, 0.0, 0.0));
}
}
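To illustrate the falloff in the shader's `weighted_average`, here is a Rust port using the same constants: depths near the 2x2 block's minimum keep full weight, while a sample far behind it is weighted out entirely instead of dragging the average toward the background.

```rust
// Rust port of the shader's weighted_average, for illustration only.
// Weight falls off linearly with distance from the block's minimum depth.
fn weighted_average(depth0: f32, depth1: f32, depth2: f32, depth3: f32) -> f32 {
    let depth_range_scale_factor = 0.75;
    let effect_radius = depth_range_scale_factor * 0.5 * 1.457;
    let falloff_range = 0.615 * effect_radius;
    let falloff_from = effect_radius * (1.0 - 0.615);
    let falloff_mul = -1.0 / falloff_range;
    let falloff_add = falloff_from / falloff_range + 1.0;

    let min_depth = depth0.min(depth1).min(depth2.min(depth3));
    // Equivalent of the shader's saturate().
    let weight = |d: f32| ((d - min_depth) * falloff_mul + falloff_add).clamp(0.0, 1.0);
    let (w0, w1, w2, w3) = (weight(depth0), weight(depth1), weight(depth2), weight(depth3));
    (w0 * depth0 + w1 * depth1 + w2 * depth2 + w3 * depth3) / (w0 + w1 + w2 + w3)
}

fn main() {
    // Equal depths pass through unchanged.
    assert!((weighted_average(2.0, 2.0, 2.0, 2.0) - 2.0).abs() < 1e-6);
    // A distant outlier gets zero weight rather than biasing the average.
    assert!((weighted_average(1.0, 1.0, 1.0, 10.0) - 1.0).abs() < 1e-6);
}
```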


@@ -0,0 +1,92 @@
// 3x3 bilateral filter (edge-preserving blur)
// https://people.csail.mit.edu/sparis/bf_course/course_notes.pdf
// Note: Does not use the Gaussian kernel part of a typical bilateral blur
// From the paper: "use the information gathered on a neighborhood of 4 × 4 using a bilateral filter for
// reconstruction, using _uniform_ convolution weights"
// Note: The paper does a 4x4 (not quite centered) filter, offset by +/- 1 pixel every other frame
// XeGTAO does a 3x3 filter, on two pixels at a time per compute thread, applied twice
// We do a 3x3 filter, on 1 pixel per compute thread, applied once
#import bevy_render::view
@group(0) @binding(0) var ambient_occlusion_noisy: texture_2d<f32>;
@group(0) @binding(1) var depth_differences: texture_2d<u32>;
@group(0) @binding(2) var ambient_occlusion: texture_storage_2d<r16float, write>;
@group(1) @binding(0) var point_clamp_sampler: sampler;
@group(1) @binding(1) var<uniform> view: View;
@compute
@workgroup_size(8, 8, 1)
fn spatial_denoise(@builtin(global_invocation_id) global_id: vec3<u32>) {
let pixel_coordinates = vec2<i32>(global_id.xy);
let uv = vec2<f32>(pixel_coordinates) / view.viewport.zw;
let edges0 = textureGather(0, depth_differences, point_clamp_sampler, uv);
let edges1 = textureGather(0, depth_differences, point_clamp_sampler, uv, vec2<i32>(2i, 0i));
let edges2 = textureGather(0, depth_differences, point_clamp_sampler, uv, vec2<i32>(1i, 2i));
let visibility0 = textureGather(0, ambient_occlusion_noisy, point_clamp_sampler, uv);
let visibility1 = textureGather(0, ambient_occlusion_noisy, point_clamp_sampler, uv, vec2<i32>(2i, 0i));
let visibility2 = textureGather(0, ambient_occlusion_noisy, point_clamp_sampler, uv, vec2<i32>(0i, 2i));
let visibility3 = textureGather(0, ambient_occlusion_noisy, point_clamp_sampler, uv, vec2<i32>(2i, 2i));
let left_edges = myunpack4x8unorm(edges0.x);
let right_edges = myunpack4x8unorm(edges1.x);
let top_edges = myunpack4x8unorm(edges0.z);
let bottom_edges = myunpack4x8unorm(edges2.w);
var center_edges = myunpack4x8unorm(edges0.y);
center_edges *= vec4<f32>(left_edges.y, right_edges.x, top_edges.w, bottom_edges.z);
let center_weight = 1.2;
let left_weight = center_edges.x;
let right_weight = center_edges.y;
let top_weight = center_edges.z;
let bottom_weight = center_edges.w;
let top_left_weight = 0.425 * (top_weight * top_edges.x + left_weight * left_edges.z);
let top_right_weight = 0.425 * (top_weight * top_edges.y + right_weight * right_edges.z);
let bottom_left_weight = 0.425 * (bottom_weight * bottom_edges.x + left_weight * left_edges.w);
let bottom_right_weight = 0.425 * (bottom_weight * bottom_edges.y + right_weight * right_edges.w);
let center_visibility = visibility0.y;
let left_visibility = visibility0.x;
let right_visibility = visibility0.z;
let top_visibility = visibility1.x;
let bottom_visibility = visibility2.z;
let top_left_visibility = visibility0.w;
let top_right_visibility = visibility1.w;
let bottom_left_visibility = visibility2.w;
let bottom_right_visibility = visibility3.w;
var sum = center_visibility;
sum += left_visibility * left_weight;
sum += right_visibility * right_weight;
sum += top_visibility * top_weight;
sum += bottom_visibility * bottom_weight;
sum += top_left_visibility * top_left_weight;
sum += top_right_visibility * top_right_weight;
sum += bottom_left_visibility * bottom_left_weight;
sum += bottom_right_visibility * bottom_right_weight;
var sum_weight = center_weight;
sum_weight += left_weight;
sum_weight += right_weight;
sum_weight += top_weight;
sum_weight += bottom_weight;
sum_weight += top_left_weight;
sum_weight += top_right_weight;
sum_weight += bottom_left_weight;
sum_weight += bottom_right_weight;
let denoised_visibility = sum / sum_weight;
textureStore(ambient_occlusion, pixel_coordinates, vec4<f32>(denoised_visibility, 0.0, 0.0, 0.0));
}
// TODO: Remove this once https://github.com/gfx-rs/naga/pull/2353 lands in Bevy
fn myunpack4x8unorm(e: u32) -> vec4<f32> {
return vec4<f32>(clamp(f32(e & 0xFFu) / 255.0, 0.0, 1.0),
clamp(f32((e >> 8u) & 0xFFu) / 255.0, 0.0, 1.0),
clamp(f32((e >> 16u) & 0xFFu) / 255.0, 0.0, 1.0),
clamp(f32((e >> 24u) & 0xFFu) / 255.0, 0.0, 1.0));
}
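The `myunpack4x8unorm` polyfill above mirrors the `unpack4x8unorm` builtin. A Rust equivalent makes the byte layout explicit; note the shader's clamp is effectively a no-op, since each extracted byte is already at most 255.

```rust
// Rust equivalent of the shader's unpack4x8unorm polyfill: splits a u32
// into four bytes (low byte first) and normalizes each to [0, 1].
fn unpack4x8unorm(e: u32) -> [f32; 4] {
    [
        (e & 0xFF) as f32 / 255.0,
        ((e >> 8) & 0xFF) as f32 / 255.0,
        ((e >> 16) & 0xFF) as f32 / 255.0,
        ((e >> 24) & 0xFF) as f32 / 255.0,
    ]
}

fn main() {
    // Lowest byte maps to the x component, highest to w.
    assert_eq!(unpack4x8unorm(0x0000_00FF), [1.0, 0.0, 0.0, 0.0]);
    assert_eq!(unpack4x8unorm(0xFF00_0000), [0.0, 0.0, 0.0, 1.0]);
    // Mid-range bytes normalize to fractions of 255.
    assert!((unpack4x8unorm(0x0000_8000)[1] - 128.0 / 255.0).abs() < 1e-6);
}
```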


@@ -87,7 +87,9 @@ impl Plugin for ViewPlugin {
/// .insert_resource(Msaa::default())
/// .run();
/// ```
#[derive(Resource, Default, Clone, Copy, ExtractResource, Reflect, PartialEq, PartialOrd)]
#[derive(
Resource, Default, Clone, Copy, ExtractResource, Reflect, PartialEq, PartialOrd, Debug,
)]
#[reflect(Resource)]
pub enum Msaa {
Off = 1,

examples/3d/ssao.rs (new file)

@@ -0,0 +1,199 @@
//! A scene showcasing screen space ambient occlusion.
use bevy::{
core_pipeline::experimental::taa::{TemporalAntiAliasBundle, TemporalAntiAliasPlugin},
pbr::{
ScreenSpaceAmbientOcclusionBundle, ScreenSpaceAmbientOcclusionQualityLevel,
ScreenSpaceAmbientOcclusionSettings,
},
prelude::*,
render::camera::TemporalJitter,
};
use std::f32::consts::PI;
fn main() {
App::new()
.insert_resource(AmbientLight {
brightness: 5.0,
..default()
})
.add_plugins(DefaultPlugins)
.add_plugin(TemporalAntiAliasPlugin)
.add_systems(Startup, setup)
.add_systems(Update, update)
.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
asset_server: Res<AssetServer>,
) {
commands
.spawn(Camera3dBundle {
camera: Camera {
hdr: true,
..default()
},
transform: Transform::from_xyz(-2.0, 2.0, -2.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
})
.insert(ScreenSpaceAmbientOcclusionBundle::default())
.insert(TemporalAntiAliasBundle::default());
let material = materials.add(StandardMaterial {
base_color: Color::rgb(0.5, 0.5, 0.5),
perceptual_roughness: 1.0,
reflectance: 0.0,
..default()
});
commands.spawn(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material: material.clone(),
transform: Transform::from_xyz(0.0, 0.0, 1.0),
..default()
});
commands.spawn(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material: material.clone(),
transform: Transform::from_xyz(0.0, -1.0, 0.0),
..default()
});
commands.spawn(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material,
transform: Transform::from_xyz(1.0, 0.0, 0.0),
..default()
});
commands.spawn((
PbrBundle {
mesh: meshes.add(Mesh::from(shape::UVSphere {
radius: 0.4,
sectors: 72,
stacks: 36,
})),
material: materials.add(StandardMaterial {
base_color: Color::rgb(0.4, 0.4, 0.4),
perceptual_roughness: 1.0,
reflectance: 0.0,
..default()
}),
..default()
},
SphereMarker,
));
commands.spawn(DirectionalLightBundle {
directional_light: DirectionalLight {
shadows_enabled: true,
..default()
},
transform: Transform::from_rotation(Quat::from_euler(
EulerRot::ZYX,
0.0,
PI * -0.15,
PI * -0.15,
)),
..default()
});
commands.spawn(
TextBundle::from_section(
"",
TextStyle {
font: asset_server.load("fonts/FiraMono-Medium.ttf"),
font_size: 26.0,
color: Color::BLACK,
},
)
.with_style(Style {
position_type: PositionType::Absolute,
bottom: Val::Px(10.0),
left: Val::Px(10.0),
..default()
}),
);
}
fn update(
camera: Query<
(
Entity,
Option<&ScreenSpaceAmbientOcclusionSettings>,
Option<&TemporalJitter>,
),
With<Camera>,
>,
mut text: Query<&mut Text>,
mut sphere: Query<&mut Transform, With<SphereMarker>>,
mut commands: Commands,
keycode: Res<Input<KeyCode>>,
time: Res<Time>,
) {
let mut sphere = sphere.single_mut();
sphere.translation.y = (time.elapsed_seconds() / 1.7).sin() * 0.7;
let (camera_entity, ssao_settings, temporal_jitter) = camera.single();
let mut commands = commands.entity(camera_entity);
if keycode.just_pressed(KeyCode::Key1) {
commands.remove::<ScreenSpaceAmbientOcclusionSettings>();
}
if keycode.just_pressed(KeyCode::Key2) {
commands.insert(ScreenSpaceAmbientOcclusionSettings {
quality_level: ScreenSpaceAmbientOcclusionQualityLevel::Low,
});
}
if keycode.just_pressed(KeyCode::Key3) {
commands.insert(ScreenSpaceAmbientOcclusionSettings {
quality_level: ScreenSpaceAmbientOcclusionQualityLevel::Medium,
});
}
if keycode.just_pressed(KeyCode::Key4) {
commands.insert(ScreenSpaceAmbientOcclusionSettings {
quality_level: ScreenSpaceAmbientOcclusionQualityLevel::High,
});
}
if keycode.just_pressed(KeyCode::Key5) {
commands.insert(ScreenSpaceAmbientOcclusionSettings {
quality_level: ScreenSpaceAmbientOcclusionQualityLevel::Ultra,
});
}
if keycode.just_pressed(KeyCode::Space) {
if temporal_jitter.is_some() {
commands.remove::<TemporalJitter>();
} else {
commands.insert(TemporalJitter::default());
}
}
let mut text = text.single_mut();
let text = &mut text.sections[0].value;
text.clear();
let (o, l, m, h, u) = match ssao_settings.map(|s| s.quality_level) {
None => ("*", "", "", "", ""),
Some(ScreenSpaceAmbientOcclusionQualityLevel::Low) => ("", "*", "", "", ""),
Some(ScreenSpaceAmbientOcclusionQualityLevel::Medium) => ("", "", "*", "", ""),
Some(ScreenSpaceAmbientOcclusionQualityLevel::High) => ("", "", "", "*", ""),
Some(ScreenSpaceAmbientOcclusionQualityLevel::Ultra) => ("", "", "", "", "*"),
_ => unreachable!(),
};
text.push_str("SSAO Quality:\n");
text.push_str(&format!("(1) {o}Off{o}\n"));
text.push_str(&format!("(2) {l}Low{l}\n"));
text.push_str(&format!("(3) {m}Medium{m}\n"));
text.push_str(&format!("(4) {h}High{h}\n"));
text.push_str(&format!("(5) {u}Ultra{u}\n\n"));
text.push_str("Temporal Antialiasing:\n");
text.push_str(match temporal_jitter {
Some(_) => "(Space) Enabled",
None => "(Space) Disabled",
});
}
#[derive(Component)]
struct SphereMarker;


@@ -128,6 +128,7 @@ Example | Description
[Parenting](../examples/3d/parenting.rs) | Demonstrates parent->child relationships and relative transformations
[Physically Based Rendering](../examples/3d/pbr.rs) | Demonstrates use of Physically Based Rendering (PBR) properties
[Render to Texture](../examples/3d/render_to_texture.rs) | Shows how to render to a texture, useful for mirrors, UI, or exporting images
[Screen Space Ambient Occlusion](../examples/3d/ssao.rs) | A scene showcasing screen space ambient occlusion
[Shadow Biases](../examples/3d/shadow_biases.rs) | Demonstrates how shadow biases affect shadows in a 3d scene
[Shadow Caster and Receiver](../examples/3d/shadow_caster_receiver.rs) | Demonstrates how to prevent meshes from casting/receiving shadows in a 3d scene
[Skybox](../examples/3d/skybox.rs) | Load a cubemap texture onto a cube like a skybox and cycle through different compressed texture formats.