Allow volumetric fog to be localized to specific, optionally voxelized, regions. (#14099)

Currently, volumetric fog is global and affects the entire scene
uniformly. This is inadequate for many use cases, such as local smoke
effects. To address this problem, this commit introduces *fog volumes*,
which are axis-aligned bounding boxes (AABBs) that specify fog
parameters inside their boundaries. Such volumes can also specify a
*density texture*, a 3D texture of voxels that specifies the density of
the fog at each point.

To create a fog volume, add a `FogVolume` component to an entity (which
is included in the new `FogVolumeBundle` convenience bundle). Like light
probes, a fog volume is conceptually a 1×1×1 cube centered on the
origin; a transform can be used to position and resize this region. Many
of the fields on the existing `VolumetricFogSettings` have migrated to
the new `FogVolume` component. `VolumetricFogSettings` on a camera is
still required to enable volumetric fog, but it is no longer sufficient
on its own; at least one `FogVolume` must also be present in the scene.
Applications that wish to retain the
old global fog behavior can simply surround the scene with a large fog
volume.
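
For example, a minimal setup might look like the following sketch (the asset path, transform scale, and parameter values are illustrative):

```rust
use bevy::pbr::{FogVolume, FogVolumeBundle, VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    // A fog volume is a 1×1×1 cube in local space; its transform positions and
    // scales it. This one covers a 2×2×2 region centered on the origin and
    // samples a voxel density texture.
    commands.spawn(FogVolumeBundle {
        transform: Transform::from_scale(Vec3::splat(2.0)),
        fog_volume: FogVolume {
            density_texture: Some(asset_server.load("volumes/bunny.ktx2")),
            density_factor: 1.0,
            ..default()
        },
        ..default()
    });

    // Volumetric fog is only processed for cameras that have `VolumetricFogSettings`.
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_xyz(0.0, 0.5, 2.0).looking_at(Vec3::ZERO, Vec3::Y),
            ..default()
        },
        VolumetricFogSettings::default(),
    ));

    // Lights must opt in with `VolumetricLight` and need shadow maps for the
    // raymarch to sample.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));
}
```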

By way of implementation, this commit converts the volumetric fog shader
from a full-screen shader to one applied to a mesh. The strategy is
different depending on whether the camera is inside or outside the fog
volume. If the camera is inside the fog volume, the mesh is simply a
plane scaled to the viewport, effectively falling back to a full-screen
pass. If the camera is outside the fog volume, the mesh is a cube
transformed to coincide with the boundaries of the fog volume's AABB.
Importantly, in the latter case, only the front faces of the cuboid are
rendered. Instead of treating the boundaries of the fog as a sphere
centered on the camera position, as we did prior to this patch, we
raytrace the far planes of the AABB to determine the portion of each ray
contained within the fog volume. We then raymarch in shadow map space as
usual. If a density texture is present, we modulate the fixed density
value with the trilinearly-interpolated value from that texture.
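
For reference, the far-plane raytrace can be sketched in Rust roughly as follows (the real logic lives in the fragment shader in `volumetric_fog.wgsl`; the standalone function and its name here are illustrative). Each far plane is encoded in view space as `(N, -N·Q)`, and the view ray starts at the camera, which sits at the view-space origin, and passes through the fragment's position on the volume's front face:

```rust
use bevy::math::{Vec3, Vec4, Vec4Swizzles};

/// Returns the view-space depth at which the view ray through `view_start_pos`
/// (a point on the fog volume's front face) exits the volume, by intersecting
/// the ray with the three candidate back-face planes and keeping the hit that
/// lies in front of the other two.
fn far_plane_exit_depth(view_start_pos: Vec3, far_planes: [Vec4; 3]) -> f32 {
    let mut end_depth_view = 0.0;
    for i in 0..3 {
        let plane = far_planes[i];
        // Solve plane · (view_start_pos * t, 1) = 0 for t; the intersection
        // must lie in front of the camera.
        let t = -plane.w / plane.xyz().dot(view_start_pos);
        if t < 0.0 {
            continue;
        }
        let hit = (view_start_pos * t).extend(1.0);
        let other_a = far_planes[(i + 1) % 3];
        let other_b = far_planes[(i + 2) % 3];
        if other_a.dot(hit) >= 0.0 && other_b.dot(hit) >= 0.0 {
            end_depth_view = -hit.z;
            break;
        }
    }
    end_depth_view
}
```

The resulting depth is then clamped against the depth buffer so the raymarch doesn't continue through opaque geometry.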

Furthermore, this patch introduces optional jitter to fog volumes,
intended for use with TAA. This modifies the position of the ray from
frame to frame using interleaved gradient noise, in order to reduce
aliasing artifacts. Many implementations of volumetric fog in games use
this technique. Note that this patch makes no attempt to write a motion
vector; this is because when a view ray intersects multiple voxels
there's no single direction of motion. Consequently, fog volumes can
have ghosting artifacts, but because fog is "ghostly" by its nature,
these artifacts are less objectionable than they would be for opaque
objects.
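
The noise pattern is ordinary interleaved gradient noise (Jimenez 2014). Here is a rough Rust sketch of how the jittered ray origin is derived; the constants follow the common formulation, and the helper names are illustrative (the shader uses Bevy's `interleaved_gradient_noise` together with the global frame counter):

```rust
use bevy::math::{Vec2, Vec3};

/// Interleaved gradient noise: a cheap per-pixel pattern that also varies with
/// the frame index, so TAA can average the jitter out over time.
fn interleaved_gradient_noise(pixel: Vec2, frame: u32) -> f32 {
    let xy = pixel + Vec2::splat(5.588238 * (frame % 64) as f32);
    (52.9829189 * (0.06711056 * xy.x + 0.00583715 * xy.y).fract()).fract()
}

/// Offsets the ray origin along the ray direction by up to `jitter` world
/// units (the `VolumetricFogSettings::jitter` field).
fn jittered_ray_origin(ro_world: Vec3, rd_world: Vec3, pixel: Vec2, frame: u32, jitter: f32) -> Vec3 {
    ro_world + rd_world * interleaved_gradient_noise(pixel, frame) * jitter
}
```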

A new example, `fog_volumes`, has been added. It demonstrates a single
fog volume containing a voxelized representation of the Stanford bunny.
The existing `volumetric_fog` example has been updated to use the new
local volumetrics API.

## Changelog

### Added

* Local `FogVolume`s are now supported, to localize fog to specific
regions. They can optionally have 3D density voxel textures for precise
control over the distribution of the fog.

### Changed

* `VolumetricFogSettings` on a camera no longer enables volumetric fog;
instead, it simply enables the processing of `FogVolume`s within the
scene.

## Migration Guide

* A `FogVolume` is now necessary in order to enable volumetric fog, in
addition to `VolumetricFogSettings` on the camera. Existing uses of
volumetric fog can be migrated by placing a large `FogVolume`
surrounding the scene, as in the sketch below.
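
A minimal sketch of that migration (the scale and parameter values are illustrative; choose a size that encloses everything the camera can see):

```rust
use bevy::pbr::{FogVolume, FogVolumeBundle, VolumetricFogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // The camera still opts in to fog processing with `VolumetricFogSettings`...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // ...but the fog parameters that used to live there (color, density, and so
    // on) now live on a `FogVolume` that surrounds the scene.
    commands.spawn(FogVolumeBundle {
        transform: Transform::from_scale(Vec3::splat(100.0)),
        fog_volume: FogVolume {
            fog_color: Color::WHITE,
            density_factor: 0.1,
            ..default()
        },
        ..default()
    });
}
```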

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
Patrick Walton 2024-07-15 20:14:12 -07:00 committed by GitHub
parent ee15be8549
commit 20c6bcdba4
9 changed files with 1162 additions and 525 deletions


@@ -3262,6 +3262,17 @@ description = "Demonstrates how to enqueue custom draw commands in a render phas
category = "Shaders"
wasm = true
[[example]]
name = "fog_volumes"
path = "examples/3d/fog_volumes.rs"
doc-scrape-examples = true
[package.metadata.example.fog_volumes]
name = "Fog volumes"
description = "Demonstrates fog volumes"
category = "3D Rendering"
wasm = false
[[example]]
name = "physics_in_fixed_timestep"
path = "examples/movement/physics_in_fixed_timestep.rs"

BIN  assets/volumes/bunny.ktx2 (new binary file; contents not shown)


@@ -57,7 +57,9 @@ pub use prepass::*;
pub use render::*;
pub use ssao::*;
pub use ssr::*;
pub use volumetric_fog::*;
pub use volumetric_fog::{
FogVolume, FogVolumeBundle, VolumetricFogPlugin, VolumetricFogSettings, VolumetricLight,
};
pub mod prelude {
#[doc(hidden)]


@@ -30,56 +30,38 @@
//! [Henyey-Greenstein phase function]: https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
use bevy_app::{App, Plugin};
use bevy_asset::{load_internal_asset, Handle};
use bevy_color::{Color, ColorToComponents};
use bevy_core_pipeline::{
core_3d::{
graph::{Core3d, Node3d},
prepare_core_3d_depth_textures, Camera3d,
},
fullscreen_vertex_shader::fullscreen_shader_vertex_state,
prepass::{DeferredPrepass, DepthPrepass, MotionVectorPrepass, NormalPrepass},
use bevy_asset::{load_internal_asset, Assets, Handle};
use bevy_color::Color;
use bevy_core_pipeline::core_3d::{
graph::{Core3d, Node3d},
prepare_core_3d_depth_textures,
};
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::{
component::Component,
entity::Entity,
query::{Has, QueryItem, With},
reflect::ReflectComponent,
bundle::Bundle, component::Component, reflect::ReflectComponent,
schedule::IntoSystemConfigs as _,
system::{lifetimeless::Read, Commands, Query, Res, ResMut, Resource},
world::{FromWorld, World},
};
use bevy_math::Vec3;
use bevy_math::{
primitives::{Cuboid, Plane3d},
Vec2, Vec3,
};
use bevy_reflect::Reflect;
use bevy_render::{
render_graph::{NodeRunError, RenderGraphApp, RenderGraphContext, ViewNode, ViewNodeRunner},
render_resource::{
binding_types::{
sampler, texture_2d, texture_depth_2d, texture_depth_2d_multisampled, uniform_buffer,
},
BindGroupEntries, BindGroupLayout, BindGroupLayoutEntries, CachedRenderPipelineId,
ColorTargetState, ColorWrites, DynamicUniformBuffer, FilterMode, FragmentState,
MultisampleState, Operations, PipelineCache, PrimitiveState, RenderPassColorAttachment,
RenderPassDescriptor, RenderPipelineDescriptor, Sampler, SamplerBindingType,
SamplerDescriptor, Shader, ShaderStages, ShaderType, SpecializedRenderPipeline,
SpecializedRenderPipelines, TextureFormat, TextureSampleType, TextureUsages,
},
renderer::{RenderContext, RenderDevice, RenderQueue},
texture::BevyDefault,
view::{ExtractedView, Msaa, ViewDepthTexture, ViewTarget, ViewUniformOffset},
Extract, ExtractSchedule, Render, RenderApp, RenderSet,
mesh::{Mesh, Meshable},
render_graph::{RenderGraphApp, ViewNodeRunner},
render_resource::{Shader, SpecializedRenderPipelines},
texture::Image,
view::{InheritedVisibility, ViewVisibility, Visibility},
ExtractSchedule, Render, RenderApp, RenderSet,
};
use bevy_utils::prelude::default;
use crate::{
graph::NodePbr, MeshPipelineViewLayoutKey, MeshPipelineViewLayouts, MeshViewBindGroup,
ViewFogUniformOffset, ViewLightProbesUniformOffset, ViewLightsUniformOffset,
ViewScreenSpaceReflectionsUniformOffset,
use bevy_transform::components::{GlobalTransform, Transform};
use render::{
VolumetricFogNode, VolumetricFogPipeline, VolumetricFogUniformBuffer, CUBE_MESH, PLANE_MESH,
VOLUMETRIC_FOG_HANDLE,
};
/// The volumetric fog shader.
pub const VOLUMETRIC_FOG_HANDLE: Handle<Shader> = Handle::weak_from_u128(17400058287583986650);
use crate::graph::NodePbr;
pub mod render;
/// A plugin that implements volumetric fog.
pub struct VolumetricFogPlugin;
@@ -92,19 +74,12 @@ pub struct VolumetricFogPlugin;
#[reflect(Component)]
pub struct VolumetricLight;
/// When placed on a [`Camera3d`], enables volumetric fog and volumetric
/// lighting, also known as light shafts or god rays.
/// When placed on a [`bevy_core_pipeline::core_3d::Camera3d`], enables
/// volumetric fog and volumetric lighting, also known as light shafts or god
/// rays.
#[derive(Clone, Copy, Component, Debug, Reflect)]
#[reflect(Component)]
pub struct VolumetricFogSettings {
/// The color of the fog.
///
/// Note that the fog must be lit by a [`VolumetricLight`] or ambient light
/// in order for this color to appear.
///
/// Defaults to white.
pub fog_color: Color,
/// Color of the ambient light.
///
/// This is separate from Bevy's [`AmbientLight`](crate::light::AmbientLight) because an
@@ -124,6 +99,13 @@ pub struct VolumetricFogSettings {
/// Defaults to 0.1.
pub ambient_intensity: f32,
/// The maximum distance to offset the ray origin randomly by, in meters.
///
/// This is intended for use with temporal antialiasing. It helps fog look
/// less blocky by varying the start position of the ray, using interleaved
/// gradient noise.
pub jitter: f32,
/// The number of raymarching steps to perform.
///
/// Higher values produce higher-quality results with less banding, but
@@ -131,16 +113,44 @@ pub struct VolumetricFogSettings {
///
/// The default value is 64.
pub step_count: u32,
}
/// The maximum distance that Bevy will trace a ray for, in world space.
/// A convenient [`Bundle`] that contains all components necessary to generate a
/// fog volume.
#[derive(Bundle, Clone, Debug, Default)]
pub struct FogVolumeBundle {
/// The actual fog volume.
pub fog_volume: FogVolume,
/// Visibility.
pub visibility: Visibility,
/// Inherited visibility.
pub inherited_visibility: InheritedVisibility,
/// View visibility.
pub view_visibility: ViewVisibility,
/// The local transform. Set this to change the position and scale of the
/// fog's axis-aligned bounding box (AABB).
pub transform: Transform,
/// The global transform.
pub global_transform: GlobalTransform,
}
#[derive(Clone, Component, Debug, Reflect)]
#[reflect(Component)]
pub struct FogVolume {
/// The color of the fog.
///
/// You can think of this as the radius of a sphere of fog surrounding the
/// camera. It has to be capped to a finite value or else there would be an
/// infinite amount of fog, which would result in completely-opaque areas
/// where the skybox would be.
/// Note that the fog must be lit by a [`VolumetricLight`] or ambient light
/// in order for this color to appear.
///
/// The default value is 25.
pub max_depth: f32,
/// Defaults to white.
pub fog_color: Color,
/// The density of fog, which measures how dark the fog is.
///
/// The default value is 0.1.
pub density_factor: f32,
pub density_texture: Option<Handle<Image>>,
/// The absorption coefficient, which measures what fraction of light is
/// absorbed by the fog at each step.
@@ -156,12 +166,8 @@ pub struct VolumetricFogSettings {
/// The default value is 0.3.
pub scattering: f32,
/// The density of fog, which measures how dark the fog is.
///
/// The default value is 0.1.
pub density: f32,
/// Measures the fraction of light that's scattered *toward* the camera, as opposed to *away* from the camera.
/// Measures the fraction of light that's scattered *toward* the camera, as
/// opposed to *away* from the camera.
///
/// Increasing this value makes light shafts become more prominent when the
/// camera is facing toward their source and less prominent when the camera
@@ -187,61 +193,6 @@ pub struct VolumetricFogSettings {
pub light_intensity: f32,
}
/// The GPU pipeline for the volumetric fog postprocessing effect.
#[derive(Resource)]
pub struct VolumetricFogPipeline {
/// A reference to the shared set of mesh pipeline view layouts.
mesh_view_layouts: MeshPipelineViewLayouts,
/// The view bind group when multisample antialiasing isn't in use.
volumetric_view_bind_group_layout_no_msaa: BindGroupLayout,
/// The view bind group when multisample antialiasing is in use.
volumetric_view_bind_group_layout_msaa: BindGroupLayout,
/// The sampler that we use to sample the postprocessing input.
color_sampler: Sampler,
}
#[derive(Component, Deref, DerefMut)]
pub struct ViewVolumetricFogPipeline(pub CachedRenderPipelineId);
/// The node in the render graph, part of the postprocessing stack, that
/// implements volumetric fog.
#[derive(Default)]
pub struct VolumetricFogNode;
/// Identifies a single specialization of the volumetric fog shader.
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
pub struct VolumetricFogPipelineKey {
/// The layout of the view, which is needed for the raymarching.
mesh_pipeline_view_key: MeshPipelineViewLayoutKey,
/// Whether the view has high dynamic range.
hdr: bool,
}
/// The same as [`VolumetricFogSettings`], but formatted for the GPU.
#[derive(ShaderType)]
pub struct VolumetricFogUniform {
fog_color: Vec3,
light_tint: Vec3,
ambient_color: Vec3,
ambient_intensity: f32,
step_count: u32,
max_depth: f32,
absorption: f32,
scattering: f32,
density: f32,
scattering_asymmetry: f32,
light_intensity: f32,
}
/// Specifies the offset within the [`VolumetricFogUniformBuffer`] of the
/// [`VolumetricFogUniform`] for a specific view.
#[derive(Component, Deref, DerefMut)]
pub struct ViewVolumetricFogUniformOffset(u32);
/// The GPU buffer that stores the [`VolumetricFogUniform`] data.
#[derive(Resource, Default, Deref, DerefMut)]
pub struct VolumetricFogUniformBuffer(pub DynamicUniformBuffer<VolumetricFogUniform>);
impl Plugin for VolumetricFogPlugin {
fn build(&self, app: &mut App) {
load_internal_asset!(
@@ -250,6 +201,11 @@ impl Plugin for VolumetricFogPlugin {
"volumetric_fog.wgsl",
Shader::from_wgsl
);
let mut meshes = app.world_mut().resource_mut::<Assets<Mesh>>();
meshes.insert(&PLANE_MESH, Plane3d::new(Vec3::Z, Vec2::ONE).mesh().into());
meshes.insert(&CUBE_MESH, Cuboid::new(1.0, 1.0, 1.0).mesh().into());
app.register_type::<VolumetricFogSettings>()
.register_type::<VolumetricLight>();
@@ -260,13 +216,13 @@ impl Plugin for VolumetricFogPlugin {
render_app
.init_resource::<SpecializedRenderPipelines<VolumetricFogPipeline>>()
.init_resource::<VolumetricFogUniformBuffer>()
.add_systems(ExtractSchedule, extract_volumetric_fog)
.add_systems(ExtractSchedule, render::extract_volumetric_fog)
.add_systems(
Render,
(
prepare_volumetric_fog_pipelines.in_set(RenderSet::Prepare),
prepare_volumetric_fog_uniforms.in_set(RenderSet::Prepare),
prepare_view_depth_textures_for_volumetric_fog
render::prepare_volumetric_fog_pipelines.in_set(RenderSet::Prepare),
render::prepare_volumetric_fog_uniforms.in_set(RenderSet::Prepare),
render::prepare_view_depth_textures_for_volumetric_fog
.in_set(RenderSet::Prepare)
.before(prepare_core_3d_depth_textures),
),
@@ -297,353 +253,25 @@ impl Default for VolumetricFogSettings {
fn default() -> Self {
Self {
step_count: 64,
max_depth: 25.0,
absorption: 0.3,
scattering: 0.3,
density: 0.1,
scattering_asymmetry: 0.5,
fog_color: Color::WHITE,
// Matches `AmbientLight` defaults.
ambient_color: Color::WHITE,
ambient_intensity: 0.1,
jitter: 0.0,
}
}
}
impl Default for FogVolume {
fn default() -> Self {
Self {
absorption: 0.3,
scattering: 0.3,
density_factor: 0.1,
density_texture: None,
scattering_asymmetry: 0.5,
fog_color: Color::WHITE,
light_tint: Color::WHITE,
light_intensity: 1.0,
}
}
}
impl FromWorld for VolumetricFogPipeline {
fn from_world(world: &mut World) -> Self {
let render_device = world.resource::<RenderDevice>();
let mesh_view_layouts = world.resource::<MeshPipelineViewLayouts>();
// Create the bind group layout entries common to both the MSAA and
// non-MSAA bind group layouts.
let base_bind_group_layout_entries = &*BindGroupLayoutEntries::sequential(
ShaderStages::FRAGMENT,
(
// `volumetric_fog`
uniform_buffer::<VolumetricFogUniform>(true),
// `color_texture`
texture_2d(TextureSampleType::Float { filterable: true }),
// `color_sampler`
sampler(SamplerBindingType::Filtering),
),
);
// Because `texture_depth_2d` and `texture_depth_2d_multisampled` are
// different types, we need to make separate bind group layouts for
// each.
let mut bind_group_layout_entries_no_msaa = base_bind_group_layout_entries.to_vec();
bind_group_layout_entries_no_msaa.extend_from_slice(&BindGroupLayoutEntries::with_indices(
ShaderStages::FRAGMENT,
((3, texture_depth_2d()),),
));
let volumetric_view_bind_group_layout_no_msaa = render_device.create_bind_group_layout(
"volumetric lighting view bind group layout",
&bind_group_layout_entries_no_msaa,
);
let mut bind_group_layout_entries_msaa = base_bind_group_layout_entries.to_vec();
bind_group_layout_entries_msaa.extend_from_slice(&BindGroupLayoutEntries::with_indices(
ShaderStages::FRAGMENT,
((3, texture_depth_2d_multisampled()),),
));
let volumetric_view_bind_group_layout_msaa = render_device.create_bind_group_layout(
"volumetric lighting view bind group layout (multisampled)",
&bind_group_layout_entries_msaa,
);
let color_sampler = render_device.create_sampler(&SamplerDescriptor {
label: Some("volumetric lighting color sampler"),
mag_filter: FilterMode::Linear,
min_filter: FilterMode::Linear,
compare: None,
..default()
});
VolumetricFogPipeline {
mesh_view_layouts: mesh_view_layouts.clone(),
volumetric_view_bind_group_layout_no_msaa,
volumetric_view_bind_group_layout_msaa,
color_sampler,
}
}
}
/// Extracts [`VolumetricFogSettings`] and [`VolumetricLight`]s from the main
/// world to the render world.
pub fn extract_volumetric_fog(
mut commands: Commands,
view_targets: Extract<Query<(Entity, &VolumetricFogSettings)>>,
volumetric_lights: Extract<Query<(Entity, &VolumetricLight)>>,
) {
if volumetric_lights.is_empty() {
return;
}
for (view_target, volumetric_fog_settings) in view_targets.iter() {
commands
.get_or_spawn(view_target)
.insert(*volumetric_fog_settings);
}
for (entity, volumetric_light) in volumetric_lights.iter() {
commands.get_or_spawn(entity).insert(*volumetric_light);
}
}
impl ViewNode for VolumetricFogNode {
type ViewQuery = (
Read<ViewTarget>,
Read<ViewDepthTexture>,
Read<ViewVolumetricFogPipeline>,
Read<ViewUniformOffset>,
Read<ViewLightsUniformOffset>,
Read<ViewFogUniformOffset>,
Read<ViewLightProbesUniformOffset>,
Read<ViewVolumetricFogUniformOffset>,
Read<MeshViewBindGroup>,
Read<ViewScreenSpaceReflectionsUniformOffset>,
);
fn run<'w>(
&self,
_: &mut RenderGraphContext,
render_context: &mut RenderContext<'w>,
(
view_target,
view_depth_texture,
view_volumetric_lighting_pipeline,
view_uniform_offset,
view_lights_offset,
view_fog_offset,
view_light_probes_offset,
view_volumetric_lighting_uniform_buffer_offset,
view_bind_group,
view_ssr_offset,
): QueryItem<'w, Self::ViewQuery>,
world: &'w World,
) -> Result<(), NodeRunError> {
let pipeline_cache = world.resource::<PipelineCache>();
let volumetric_lighting_pipeline = world.resource::<VolumetricFogPipeline>();
let volumetric_lighting_uniform_buffer = world.resource::<VolumetricFogUniformBuffer>();
let msaa = world.resource::<Msaa>();
// Fetch the uniform buffer and binding.
let (Some(pipeline), Some(volumetric_lighting_uniform_buffer_binding)) = (
pipeline_cache.get_render_pipeline(**view_volumetric_lighting_pipeline),
volumetric_lighting_uniform_buffer.binding(),
) else {
return Ok(());
};
let postprocess = view_target.post_process_write();
// Create the bind group for the view.
//
// TODO: Cache this.
let volumetric_view_bind_group_layout = match *msaa {
Msaa::Off => &volumetric_lighting_pipeline.volumetric_view_bind_group_layout_no_msaa,
_ => &volumetric_lighting_pipeline.volumetric_view_bind_group_layout_msaa,
};
let volumetric_view_bind_group = render_context.render_device().create_bind_group(
None,
volumetric_view_bind_group_layout,
&BindGroupEntries::sequential((
volumetric_lighting_uniform_buffer_binding,
postprocess.source,
&volumetric_lighting_pipeline.color_sampler,
view_depth_texture.view(),
)),
);
let render_pass_descriptor = RenderPassDescriptor {
label: Some("volumetric lighting pass"),
color_attachments: &[Some(RenderPassColorAttachment {
view: postprocess.destination,
resolve_target: None,
ops: Operations::default(),
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
};
let mut render_pass = render_context
.command_encoder()
.begin_render_pass(&render_pass_descriptor);
render_pass.set_pipeline(pipeline);
render_pass.set_bind_group(
0,
&view_bind_group.value,
&[
view_uniform_offset.offset,
view_lights_offset.offset,
view_fog_offset.offset,
**view_light_probes_offset,
**view_ssr_offset,
],
);
render_pass.set_bind_group(
1,
&volumetric_view_bind_group,
&[**view_volumetric_lighting_uniform_buffer_offset],
);
render_pass.draw(0..3, 0..1);
Ok(())
}
}
impl SpecializedRenderPipeline for VolumetricFogPipeline {
type Key = VolumetricFogPipelineKey;
fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
let mesh_view_layout = self
.mesh_view_layouts
.get_view_layout(key.mesh_pipeline_view_key);
// We always use hardware 2x2 filtering for sampling the shadow map; the
// more accurate versions with percentage-closer filtering aren't worth
// the overhead.
let mut shader_defs = vec!["SHADOW_FILTER_METHOD_HARDWARE_2X2".into()];
// We need a separate layout for MSAA and non-MSAA.
let volumetric_view_bind_group_layout = if key
.mesh_pipeline_view_key
.contains(MeshPipelineViewLayoutKey::MULTISAMPLED)
{
shader_defs.push("MULTISAMPLED".into());
self.volumetric_view_bind_group_layout_msaa.clone()
} else {
self.volumetric_view_bind_group_layout_no_msaa.clone()
};
RenderPipelineDescriptor {
label: Some("volumetric lighting pipeline".into()),
layout: vec![mesh_view_layout.clone(), volumetric_view_bind_group_layout],
push_constant_ranges: vec![],
vertex: fullscreen_shader_vertex_state(),
primitive: PrimitiveState::default(),
depth_stencil: None,
multisample: MultisampleState::default(),
fragment: Some(FragmentState {
shader: VOLUMETRIC_FOG_HANDLE,
shader_defs,
entry_point: "fragment".into(),
targets: vec![Some(ColorTargetState {
format: if key.hdr {
ViewTarget::TEXTURE_FORMAT_HDR
} else {
TextureFormat::bevy_default()
},
blend: None,
write_mask: ColorWrites::ALL,
})],
}),
}
}
}
/// Specializes volumetric fog pipelines for all views with that effect enabled.
pub fn prepare_volumetric_fog_pipelines(
mut commands: Commands,
pipeline_cache: Res<PipelineCache>,
mut pipelines: ResMut<SpecializedRenderPipelines<VolumetricFogPipeline>>,
volumetric_lighting_pipeline: Res<VolumetricFogPipeline>,
view_targets: Query<
(
Entity,
&ExtractedView,
Has<NormalPrepass>,
Has<DepthPrepass>,
Has<MotionVectorPrepass>,
Has<DeferredPrepass>,
),
With<VolumetricFogSettings>,
>,
msaa: Res<Msaa>,
) {
for (entity, view, normal_prepass, depth_prepass, motion_vector_prepass, deferred_prepass) in
view_targets.iter()
{
// Create a mesh pipeline view layout key corresponding to the view.
let mut mesh_pipeline_view_key = MeshPipelineViewLayoutKey::from(*msaa);
mesh_pipeline_view_key.set(MeshPipelineViewLayoutKey::NORMAL_PREPASS, normal_prepass);
mesh_pipeline_view_key.set(MeshPipelineViewLayoutKey::DEPTH_PREPASS, depth_prepass);
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::MOTION_VECTOR_PREPASS,
motion_vector_prepass,
);
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::DEFERRED_PREPASS,
deferred_prepass,
);
// Specialize the pipeline.
let pipeline_id = pipelines.specialize(
&pipeline_cache,
&volumetric_lighting_pipeline,
VolumetricFogPipelineKey {
mesh_pipeline_view_key,
hdr: view.hdr,
},
);
commands
.entity(entity)
.insert(ViewVolumetricFogPipeline(pipeline_id));
}
}
/// A system that converts [`VolumetricFogSettings`]
pub fn prepare_volumetric_fog_uniforms(
mut commands: Commands,
mut volumetric_lighting_uniform_buffer: ResMut<VolumetricFogUniformBuffer>,
view_targets: Query<(Entity, &VolumetricFogSettings)>,
render_device: Res<RenderDevice>,
render_queue: Res<RenderQueue>,
) {
let Some(mut writer) = volumetric_lighting_uniform_buffer.get_writer(
view_targets.iter().len(),
&render_device,
&render_queue,
) else {
return;
};
for (entity, volumetric_fog_settings) in view_targets.iter() {
let offset = writer.write(&VolumetricFogUniform {
fog_color: volumetric_fog_settings.fog_color.to_linear().to_vec3(),
light_tint: volumetric_fog_settings.light_tint.to_linear().to_vec3(),
ambient_color: volumetric_fog_settings.ambient_color.to_linear().to_vec3(),
ambient_intensity: volumetric_fog_settings.ambient_intensity,
step_count: volumetric_fog_settings.step_count,
max_depth: volumetric_fog_settings.max_depth,
absorption: volumetric_fog_settings.absorption,
scattering: volumetric_fog_settings.scattering,
density: volumetric_fog_settings.density,
scattering_asymmetry: volumetric_fog_settings.scattering_asymmetry,
light_intensity: volumetric_fog_settings.light_intensity,
});
commands
.entity(entity)
.insert(ViewVolumetricFogUniformOffset(offset));
}
}
/// A system that marks all view depth textures as readable in shaders.
///
/// The volumetric lighting pass needs to do this, and it doesn't happen by
/// default.
pub fn prepare_view_depth_textures_for_volumetric_fog(
mut view_targets: Query<&mut Camera3d, With<VolumetricFogSettings>>,
) {
for mut camera in view_targets.iter_mut() {
camera.depth_texture_usages.0 |= TextureUsages::TEXTURE_BINDING.bits();
}
}


@@ -0,0 +1,822 @@
//! Rendering of fog volumes.
use std::array;
use bevy_asset::{AssetId, Handle};
use bevy_color::ColorToComponents as _;
use bevy_core_pipeline::{
core_3d::Camera3d,
prepass::{DeferredPrepass, DepthPrepass, MotionVectorPrepass, NormalPrepass},
};
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::{
component::Component,
entity::Entity,
query::{Has, QueryItem, With},
system::{lifetimeless::Read, Commands, Local, Query, Res, ResMut, Resource},
world::{FromWorld, World},
};
use bevy_math::{vec4, Mat3A, Mat4, Vec3, Vec3A, Vec4, Vec4Swizzles as _};
use bevy_render::{
mesh::{GpuBufferInfo, GpuMesh, Mesh, MeshVertexBufferLayoutRef},
render_asset::RenderAssets,
render_graph::{NodeRunError, RenderGraphContext, ViewNode},
render_resource::{
binding_types::{
sampler, texture_3d, texture_depth_2d, texture_depth_2d_multisampled, uniform_buffer,
},
BindGroupLayout, BindGroupLayoutEntries, BindingResource, BlendComponent, BlendFactor,
BlendOperation, BlendState, CachedRenderPipelineId, ColorTargetState, ColorWrites,
DynamicBindGroupEntries, DynamicUniformBuffer, Face, FragmentState, LoadOp,
MultisampleState, Operations, PipelineCache, PrimitiveState, RenderPassColorAttachment,
RenderPassDescriptor, RenderPipelineDescriptor, SamplerBindingType, Shader, ShaderStages,
ShaderType, SpecializedRenderPipeline, SpecializedRenderPipelines, StoreOp, TextureFormat,
TextureSampleType, TextureUsages, VertexState,
},
renderer::{RenderContext, RenderDevice, RenderQueue},
texture::{BevyDefault as _, GpuImage, Image},
view::{ExtractedView, Msaa, ViewDepthTexture, ViewTarget, ViewUniformOffset},
Extract,
};
use bevy_transform::components::GlobalTransform;
use bevy_utils::prelude::default;
use bitflags::bitflags;
use crate::{
FogVolume, MeshPipelineViewLayoutKey, MeshPipelineViewLayouts, MeshViewBindGroup,
ViewFogUniformOffset, ViewLightProbesUniformOffset, ViewLightsUniformOffset,
ViewScreenSpaceReflectionsUniformOffset, VolumetricFogSettings, VolumetricLight,
};
bitflags! {
/// Flags that describe the bind group layout used to render volumetric fog.
#[derive(Clone, Copy, PartialEq)]
struct VolumetricFogBindGroupLayoutKey: u8 {
/// The framebuffer is multisampled.
const MULTISAMPLED = 0x1;
/// The volumetric fog has a 3D voxel density texture.
const DENSITY_TEXTURE = 0x2;
}
}
bitflags! {
/// Flags that describe the rasterization pipeline used to render volumetric
/// fog.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct VolumetricFogPipelineKeyFlags: u8 {
/// The view's color format has high dynamic range.
const HDR = 0x1;
/// The volumetric fog has a 3D voxel density texture.
const DENSITY_TEXTURE = 0x2;
}
}
/// The volumetric fog shader.
pub const VOLUMETRIC_FOG_HANDLE: Handle<Shader> = Handle::weak_from_u128(17400058287583986650);
/// The plane mesh, which is used to render a fog volume that the camera is
/// inside.
///
/// This mesh is simply stretched to the size of the framebuffer, as when the
/// camera is inside a fog volume it's essentially a full-screen effect.
pub const PLANE_MESH: Handle<Mesh> = Handle::weak_from_u128(435245126479971076);
/// The cube mesh, which is used to render a fog volume that the camera is
/// outside.
///
/// Note that only the front faces of this cuboid will be rasterized in
/// hardware. The back faces will be calculated in the shader via raytracing.
pub const CUBE_MESH: Handle<Mesh> = Handle::weak_from_u128(5023959819001661507);
/// The total number of bind group layouts.
///
/// This is the total number of combinations of all
/// [`VolumetricFogBindGroupLayoutKey`] flags.
const VOLUMETRIC_FOG_BIND_GROUP_LAYOUT_COUNT: usize =
VolumetricFogBindGroupLayoutKey::all().bits() as usize + 1;
/// A matrix that converts from local 1×1×1 space to UVW 3D density texture
/// space.
static UVW_FROM_LOCAL: Mat4 = Mat4::from_cols(
vec4(1.0, 0.0, 0.0, 0.0),
vec4(0.0, 1.0, 0.0, 0.0),
vec4(0.0, 0.0, 1.0, 0.0),
vec4(0.5, 0.5, 0.5, 1.0),
);
/// The GPU pipeline for the volumetric fog postprocessing effect.
#[derive(Resource)]
pub struct VolumetricFogPipeline {
/// A reference to the shared set of mesh pipeline view layouts.
mesh_view_layouts: MeshPipelineViewLayouts,
/// All bind group layouts.
///
/// Since there aren't too many of these, we precompile them all.
volumetric_view_bind_group_layouts: [BindGroupLayout; VOLUMETRIC_FOG_BIND_GROUP_LAYOUT_COUNT],
}
/// The two render pipelines that we use for fog volumes: one for when a 3D
/// density texture is present and one for when it isn't.
#[derive(Component)]
pub struct ViewVolumetricFogPipelines {
/// The render pipeline that we use when no density texture is present, and
/// the density distribution is uniform.
pub textureless: CachedRenderPipelineId,
/// The render pipeline that we use when a density texture is present.
pub textured: CachedRenderPipelineId,
}
/// The node in the render graph, part of the postprocessing stack, that
/// implements volumetric fog.
#[derive(Default)]
pub struct VolumetricFogNode;
/// Identifies a single specialization of the volumetric fog shader.
#[derive(PartialEq, Eq, Hash, Clone)]
pub struct VolumetricFogPipelineKey {
/// The layout of the view, which is needed for the raymarching.
mesh_pipeline_view_key: MeshPipelineViewLayoutKey,
/// The vertex buffer layout of the primitive.
///
/// Both planes (used when the camera is inside the fog volume) and cubes
/// (used when the camera is outside the fog volume) use identical vertex
/// buffer layouts, so we only need one of them.
vertex_buffer_layout: MeshVertexBufferLayoutRef,
/// Flags that specify features on the pipeline key.
flags: VolumetricFogPipelineKeyFlags,
}
/// The same as [`VolumetricFogSettings`] and [`FogVolume`], but formatted for
/// the GPU.
///
/// See the documentation of those structures for more information on these
/// fields.
#[derive(ShaderType)]
pub struct VolumetricFogUniform {
clip_from_local: Mat4,
/// The transform from world space to 3D density texture UVW space.
uvw_from_world: Mat4,
/// View-space plane equations of the far faces of the fog volume cuboid.
///
/// The vector takes the form V = (N, -N⋅Q), where N is the normal of the
/// plane and Q is any point in it, in view space. The equation of the plane
/// for homogeneous point P = (Px, Py, Pz, Pw) is V⋅P = 0.
far_planes: [Vec4; 3],
fog_color: Vec3,
light_tint: Vec3,
ambient_color: Vec3,
ambient_intensity: f32,
step_count: u32,
/// The radius of a sphere that bounds the fog volume in view space.
bounding_radius: f32,
absorption: f32,
scattering: f32,
density: f32,
scattering_asymmetry: f32,
light_intensity: f32,
jitter_strength: f32,
}
/// Specifies the offset within the [`VolumetricFogUniformBuffer`] of the
/// [`VolumetricFogUniform`] for a specific view.
#[derive(Component, Deref, DerefMut)]
pub struct ViewVolumetricFog(Vec<ViewFogVolume>);
/// Information that the render world needs to maintain about each fog volume.
pub struct ViewFogVolume {
/// The 3D voxel density texture for this volume, if present.
density_texture: Option<AssetId<Image>>,
/// The offset of this view's [`VolumetricFogUniform`] structure within the
/// [`VolumetricFogUniformBuffer`].
uniform_buffer_offset: u32,
/// True if the camera is outside the fog volume; false if it's inside the
/// fog volume.
exterior: bool,
}
/// The GPU buffer that stores the [`VolumetricFogUniform`] data.
#[derive(Resource, Default, Deref, DerefMut)]
pub struct VolumetricFogUniformBuffer(pub DynamicUniformBuffer<VolumetricFogUniform>);
impl FromWorld for VolumetricFogPipeline {
fn from_world(world: &mut World) -> Self {
let render_device = world.resource::<RenderDevice>();
let mesh_view_layouts = world.resource::<MeshPipelineViewLayouts>();
// Create the bind group layout entries common to all bind group
// layouts.
let base_bind_group_layout_entries = &BindGroupLayoutEntries::single(
ShaderStages::VERTEX_FRAGMENT,
// `volumetric_fog`
uniform_buffer::<VolumetricFogUniform>(true),
);
// For every combination of `VolumetricFogBindGroupLayoutKey` bits,
// create a bind group layout.
let bind_group_layouts = array::from_fn(|bits| {
let flags = VolumetricFogBindGroupLayoutKey::from_bits_retain(bits as u8);
let mut bind_group_layout_entries = base_bind_group_layout_entries.to_vec();
// `depth_texture`
bind_group_layout_entries.extend_from_slice(&BindGroupLayoutEntries::with_indices(
ShaderStages::FRAGMENT,
((
1,
if flags.contains(VolumetricFogBindGroupLayoutKey::MULTISAMPLED) {
texture_depth_2d_multisampled()
} else {
texture_depth_2d()
},
),),
));
// `density_texture` and `density_sampler`
if flags.contains(VolumetricFogBindGroupLayoutKey::DENSITY_TEXTURE) {
bind_group_layout_entries.extend_from_slice(&BindGroupLayoutEntries::with_indices(
ShaderStages::FRAGMENT,
(
(2, texture_3d(TextureSampleType::Float { filterable: true })),
(3, sampler(SamplerBindingType::Filtering)),
),
));
}
// Create the bind group layout.
let description = flags.bind_group_layout_description();
render_device.create_bind_group_layout(&*description, &bind_group_layout_entries)
});
VolumetricFogPipeline {
mesh_view_layouts: mesh_view_layouts.clone(),
volumetric_view_bind_group_layouts: bind_group_layouts,
}
}
}
/// Extracts [`VolumetricFogSettings`], [`FogVolume`], and [`VolumetricLight`]s
/// from the main world to the render world.
pub fn extract_volumetric_fog(
mut commands: Commands,
view_targets: Extract<Query<(Entity, &VolumetricFogSettings)>>,
fog_volumes: Extract<Query<(Entity, &FogVolume, &GlobalTransform)>>,
volumetric_lights: Extract<Query<(Entity, &VolumetricLight)>>,
) {
if volumetric_lights.is_empty() {
return;
}
for (entity, volumetric_fog_settings) in view_targets.iter() {
commands
.get_or_spawn(entity)
.insert(*volumetric_fog_settings);
}
for (entity, fog_volume, fog_transform) in fog_volumes.iter() {
commands
.get_or_spawn(entity)
.insert((*fog_volume).clone())
.insert(*fog_transform);
}
for (entity, volumetric_light) in volumetric_lights.iter() {
commands.get_or_spawn(entity).insert(*volumetric_light);
}
}
impl ViewNode for VolumetricFogNode {
type ViewQuery = (
Read<ViewTarget>,
Read<ViewDepthTexture>,
Read<ViewVolumetricFogPipelines>,
Read<ViewUniformOffset>,
Read<ViewLightsUniformOffset>,
Read<ViewFogUniformOffset>,
Read<ViewLightProbesUniformOffset>,
Read<ViewVolumetricFog>,
Read<MeshViewBindGroup>,
Read<ViewScreenSpaceReflectionsUniformOffset>,
);
fn run<'w>(
&self,
_: &mut RenderGraphContext,
render_context: &mut RenderContext<'w>,
(
view_target,
view_depth_texture,
view_volumetric_lighting_pipelines,
view_uniform_offset,
view_lights_offset,
view_fog_offset,
view_light_probes_offset,
view_fog_volumes,
view_bind_group,
view_ssr_offset,
): QueryItem<'w, Self::ViewQuery>,
world: &'w World,
) -> Result<(), NodeRunError> {
let pipeline_cache = world.resource::<PipelineCache>();
let volumetric_lighting_pipeline = world.resource::<VolumetricFogPipeline>();
let volumetric_lighting_uniform_buffers = world.resource::<VolumetricFogUniformBuffer>();
let image_assets = world.resource::<RenderAssets<GpuImage>>();
let msaa = world.resource::<Msaa>();
// Fetch the uniform buffer and binding.
let (
Some(textureless_pipeline),
Some(textured_pipeline),
Some(volumetric_lighting_uniform_buffer_binding),
) = (
pipeline_cache.get_render_pipeline(view_volumetric_lighting_pipelines.textureless),
pipeline_cache.get_render_pipeline(view_volumetric_lighting_pipelines.textured),
volumetric_lighting_uniform_buffers.binding(),
)
else {
return Ok(());
};
let gpu_meshes = world.resource::<RenderAssets<GpuMesh>>();
for view_fog_volume in view_fog_volumes.iter() {
// If the camera is outside the fog volume, pick the cube mesh;
// otherwise, pick the plane mesh. In the latter case we'll be
// effectively rendering a full-screen quad.
let mesh_handle = if view_fog_volume.exterior {
CUBE_MESH.clone()
} else {
PLANE_MESH.clone()
};
let density_image = view_fog_volume
.density_texture
.and_then(|density_texture| image_assets.get(density_texture));
// Pick the right pipeline, depending on whether a density texture
// is present or not.
let pipeline = if density_image.is_some() {
textured_pipeline
} else {
textureless_pipeline
};
// This should always succeed, but if the asset was unloaded don't
// panic.
let Some(gpu_mesh) = gpu_meshes.get(&mesh_handle) else {
return Ok(());
};
// Create the bind group for the view.
//
// TODO: Cache this.
let mut bind_group_layout_key = VolumetricFogBindGroupLayoutKey::empty();
bind_group_layout_key.set(
VolumetricFogBindGroupLayoutKey::MULTISAMPLED,
!matches!(*msaa, Msaa::Off),
);
// Create the bind group entries. The ones relating to the density
// texture will only be filled in if that texture is present.
let mut bind_group_entries = DynamicBindGroupEntries::sequential((
volumetric_lighting_uniform_buffer_binding.clone(),
BindingResource::TextureView(view_depth_texture.view()),
));
if let Some(density_image) = density_image {
bind_group_layout_key.insert(VolumetricFogBindGroupLayoutKey::DENSITY_TEXTURE);
bind_group_entries = bind_group_entries.extend_sequential((
BindingResource::TextureView(&density_image.texture_view),
BindingResource::Sampler(&density_image.sampler),
));
}
let volumetric_view_bind_group_layout = &volumetric_lighting_pipeline
.volumetric_view_bind_group_layouts[bind_group_layout_key.bits() as usize];
let volumetric_view_bind_group = render_context.render_device().create_bind_group(
None,
volumetric_view_bind_group_layout,
&bind_group_entries,
);
let render_pass_descriptor = RenderPassDescriptor {
label: Some("volumetric lighting pass"),
color_attachments: &[Some(RenderPassColorAttachment {
view: view_target.main_texture_view(),
resolve_target: None,
ops: Operations {
load: LoadOp::Load,
store: StoreOp::Store,
},
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
};
let mut render_pass = render_context
.command_encoder()
.begin_render_pass(&render_pass_descriptor);
render_pass.set_vertex_buffer(0, *gpu_mesh.vertex_buffer.slice(..));
render_pass.set_pipeline(pipeline);
render_pass.set_bind_group(
0,
&view_bind_group.value,
&[
view_uniform_offset.offset,
view_lights_offset.offset,
view_fog_offset.offset,
**view_light_probes_offset,
**view_ssr_offset,
],
);
render_pass.set_bind_group(
1,
&volumetric_view_bind_group,
&[view_fog_volume.uniform_buffer_offset],
);
// Draw elements or arrays, as appropriate.
match &gpu_mesh.buffer_info {
GpuBufferInfo::Indexed {
buffer,
index_format,
count,
} => {
render_pass.set_index_buffer(*buffer.slice(..), *index_format);
render_pass.draw_indexed(0..*count, 0, 0..1);
}
GpuBufferInfo::NonIndexed => {
render_pass.draw(0..gpu_mesh.vertex_count, 0..1);
}
}
}
Ok(())
}
}
impl SpecializedRenderPipeline for VolumetricFogPipeline {
type Key = VolumetricFogPipelineKey;
fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
let mesh_view_layout = self
.mesh_view_layouts
.get_view_layout(key.mesh_pipeline_view_key);
// We always use hardware 2x2 filtering for sampling the shadow map; the
// more accurate versions with percentage-closer filtering aren't worth
// the overhead.
let mut shader_defs = vec!["SHADOW_FILTER_METHOD_HARDWARE_2X2".into()];
// We need a separate layout for MSAA and non-MSAA, as well as one for
// the presence or absence of the density texture.
let mut bind_group_layout_key = VolumetricFogBindGroupLayoutKey::empty();
bind_group_layout_key.set(
VolumetricFogBindGroupLayoutKey::MULTISAMPLED,
key.mesh_pipeline_view_key
.contains(MeshPipelineViewLayoutKey::MULTISAMPLED),
);
bind_group_layout_key.set(
VolumetricFogBindGroupLayoutKey::DENSITY_TEXTURE,
key.flags
.contains(VolumetricFogPipelineKeyFlags::DENSITY_TEXTURE),
);
let volumetric_view_bind_group_layout =
self.volumetric_view_bind_group_layouts[bind_group_layout_key.bits() as usize].clone();
// Both the cube and plane have the same vertex layout, so we don't need
// to distinguish between the two.
let vertex_format = key
.vertex_buffer_layout
.0
.get_layout(&[Mesh::ATTRIBUTE_POSITION.at_shader_location(0)])
.expect("Failed to get vertex layout for volumetric fog hull");
if key
.mesh_pipeline_view_key
.contains(MeshPipelineViewLayoutKey::MULTISAMPLED)
{
shader_defs.push("MULTISAMPLED".into());
}
if key
.flags
.contains(VolumetricFogPipelineKeyFlags::DENSITY_TEXTURE)
{
shader_defs.push("DENSITY_TEXTURE".into());
}
RenderPipelineDescriptor {
label: Some("volumetric lighting pipeline".into()),
layout: vec![mesh_view_layout.clone(), volumetric_view_bind_group_layout],
push_constant_ranges: vec![],
vertex: VertexState {
shader: VOLUMETRIC_FOG_HANDLE,
shader_defs: shader_defs.clone(),
entry_point: "vertex".into(),
buffers: vec![vertex_format],
},
primitive: PrimitiveState {
cull_mode: Some(Face::Back),
..default()
},
depth_stencil: None,
multisample: MultisampleState::default(),
fragment: Some(FragmentState {
shader: VOLUMETRIC_FOG_HANDLE,
shader_defs,
entry_point: "fragment".into(),
targets: vec![Some(ColorTargetState {
format: if key.flags.contains(VolumetricFogPipelineKeyFlags::HDR) {
ViewTarget::TEXTURE_FORMAT_HDR
} else {
TextureFormat::bevy_default()
},
// Blend on top of what's already in the framebuffer. Doing
// the alpha blending with the hardware blender allows us to
// avoid having to use intermediate render targets.
blend: Some(BlendState {
color: BlendComponent {
src_factor: BlendFactor::One,
dst_factor: BlendFactor::OneMinusSrcAlpha,
operation: BlendOperation::Add,
},
alpha: BlendComponent {
src_factor: BlendFactor::Zero,
dst_factor: BlendFactor::One,
operation: BlendOperation::Add,
},
}),
write_mask: ColorWrites::ALL,
})],
}),
}
}
}
/// Specializes volumetric fog pipelines for all views with that effect enabled.
#[allow(clippy::too_many_arguments)]
pub fn prepare_volumetric_fog_pipelines(
mut commands: Commands,
pipeline_cache: Res<PipelineCache>,
mut pipelines: ResMut<SpecializedRenderPipelines<VolumetricFogPipeline>>,
volumetric_lighting_pipeline: Res<VolumetricFogPipeline>,
view_targets: Query<
(
Entity,
&ExtractedView,
Has<NormalPrepass>,
Has<DepthPrepass>,
Has<MotionVectorPrepass>,
Has<DeferredPrepass>,
),
With<VolumetricFogSettings>,
>,
msaa: Res<Msaa>,
meshes: Res<RenderAssets<GpuMesh>>,
) {
let plane_mesh = meshes.get(&PLANE_MESH).expect("Plane mesh not found!");
for (entity, view, normal_prepass, depth_prepass, motion_vector_prepass, deferred_prepass) in
view_targets.iter()
{
// Create a mesh pipeline view layout key corresponding to the view.
let mut mesh_pipeline_view_key = MeshPipelineViewLayoutKey::from(*msaa);
mesh_pipeline_view_key.set(MeshPipelineViewLayoutKey::NORMAL_PREPASS, normal_prepass);
mesh_pipeline_view_key.set(MeshPipelineViewLayoutKey::DEPTH_PREPASS, depth_prepass);
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::MOTION_VECTOR_PREPASS,
motion_vector_prepass,
);
mesh_pipeline_view_key.set(
MeshPipelineViewLayoutKey::DEFERRED_PREPASS,
deferred_prepass,
);
let mut textureless_flags = VolumetricFogPipelineKeyFlags::empty();
textureless_flags.set(VolumetricFogPipelineKeyFlags::HDR, view.hdr);
// Specialize the pipeline.
let textureless_pipeline_key = VolumetricFogPipelineKey {
mesh_pipeline_view_key,
vertex_buffer_layout: plane_mesh.layout.clone(),
flags: textureless_flags,
};
let textureless_pipeline_id = pipelines.specialize(
&pipeline_cache,
&volumetric_lighting_pipeline,
textureless_pipeline_key.clone(),
);
let textured_pipeline_id = pipelines.specialize(
&pipeline_cache,
&volumetric_lighting_pipeline,
VolumetricFogPipelineKey {
flags: textureless_pipeline_key.flags
| VolumetricFogPipelineKeyFlags::DENSITY_TEXTURE,
..textureless_pipeline_key
},
);
commands.entity(entity).insert(ViewVolumetricFogPipelines {
textureless: textureless_pipeline_id,
textured: textured_pipeline_id,
});
}
}
/// A system that converts [`VolumetricFogSettings`] into [`VolumetricFogUniform`]s.
pub fn prepare_volumetric_fog_uniforms(
mut commands: Commands,
mut volumetric_lighting_uniform_buffer: ResMut<VolumetricFogUniformBuffer>,
view_targets: Query<(Entity, &ExtractedView, &VolumetricFogSettings)>,
fog_volumes: Query<(Entity, &FogVolume, &GlobalTransform)>,
render_device: Res<RenderDevice>,
render_queue: Res<RenderQueue>,
mut local_from_world_matrices: Local<Vec<Mat4>>,
) {
let Some(mut writer) = volumetric_lighting_uniform_buffer.get_writer(
view_targets.iter().len(),
&render_device,
&render_queue,
) else {
return;
};
// Do this up front to avoid O(n^2) matrix inversion.
local_from_world_matrices.clear();
for (_, _, fog_transform) in fog_volumes.iter() {
local_from_world_matrices.push(fog_transform.compute_matrix().inverse());
}
for (view_entity, extracted_view, volumetric_fog_settings) in view_targets.iter() {
let world_from_view = extracted_view.world_from_view.compute_matrix();
let mut view_fog_volumes = vec![];
for ((_, fog_volume, _), local_from_world) in
fog_volumes.iter().zip(local_from_world_matrices.iter())
{
// Calculate the transforms to and from 1×1×1 local space.
let local_from_view = *local_from_world * world_from_view;
let view_from_local = local_from_view.inverse();
// Determine whether the camera is inside or outside the volume, and
// calculate the clip space transform.
let interior = camera_is_inside_fog_volume(&local_from_view);
let hull_clip_from_local = calculate_fog_volume_clip_from_local_transforms(
interior,
&extracted_view.clip_from_view,
&view_from_local,
);
// Calculate the radius of the sphere that bounds the fog volume.
let bounding_radius = (Mat3A::from_mat4(view_from_local) * Vec3A::splat(0.5)).length();
// Write out our uniform.
let uniform_buffer_offset = writer.write(&VolumetricFogUniform {
clip_from_local: hull_clip_from_local,
uvw_from_world: UVW_FROM_LOCAL * *local_from_world,
far_planes: get_far_planes(&view_from_local),
fog_color: fog_volume.fog_color.to_linear().to_vec3(),
light_tint: fog_volume.light_tint.to_linear().to_vec3(),
ambient_color: volumetric_fog_settings.ambient_color.to_linear().to_vec3(),
ambient_intensity: volumetric_fog_settings.ambient_intensity,
step_count: volumetric_fog_settings.step_count,
bounding_radius,
absorption: fog_volume.absorption,
scattering: fog_volume.scattering,
density: fog_volume.density_factor,
scattering_asymmetry: fog_volume.scattering_asymmetry,
light_intensity: fog_volume.light_intensity,
jitter_strength: volumetric_fog_settings.jitter,
});
view_fog_volumes.push(ViewFogVolume {
uniform_buffer_offset,
exterior: !interior,
density_texture: fog_volume.density_texture.as_ref().map(Handle::id),
});
}
commands
.entity(view_entity)
.insert(ViewVolumetricFog(view_fog_volumes));
}
}
/// A system that marks all view depth textures as readable in shaders.
///
/// The volumetric lighting pass needs to do this, and it doesn't happen by
/// default.
pub fn prepare_view_depth_textures_for_volumetric_fog(
mut view_targets: Query<&mut Camera3d>,
fog_volumes: Query<&VolumetricFogSettings>,
) {
if fog_volumes.is_empty() {
return;
}
for mut camera in view_targets.iter_mut() {
camera.depth_texture_usages.0 |= TextureUsages::TEXTURE_BINDING.bits();
}
}
fn get_far_planes(view_from_local: &Mat4) -> [Vec4; 3] {
let (mut far_planes, mut next_index) = ([Vec4::ZERO; 3], 0);
let view_from_normal_local = Mat3A::from_mat4(*view_from_local);
for &local_normal in &[
Vec3A::X,
Vec3A::NEG_X,
Vec3A::Y,
Vec3A::NEG_Y,
Vec3A::Z,
Vec3A::NEG_Z,
] {
let view_normal = (view_from_normal_local * local_normal).normalize_or_zero();
if view_normal.z <= 0.0 {
continue;
}
let view_position = *view_from_local * (-local_normal * 0.5).extend(1.0);
let plane_coords = view_normal.extend(-view_normal.dot(view_position.xyz().into()));
far_planes[next_index] = plane_coords;
next_index += 1;
if next_index == far_planes.len() {
continue;
}
}
far_planes
}
impl VolumetricFogBindGroupLayoutKey {
/// Creates an appropriate debug description for the bind group layout with
/// these flags.
fn bind_group_layout_description(&self) -> String {
if self.is_empty() {
return "volumetric lighting view bind group layout".to_owned();
}
format!(
"volumetric lighting view bind group layout ({})",
self.iter()
.filter_map(|flag| {
if flag == VolumetricFogBindGroupLayoutKey::DENSITY_TEXTURE {
Some("density texture")
} else if flag == VolumetricFogBindGroupLayoutKey::MULTISAMPLED {
Some("multisampled")
} else {
None
}
})
.collect::<Vec<_>>()
.join(", ")
)
}
}
/// Given the transform from the view to the 1×1×1 cube in local fog volume
/// space, returns true if the camera is inside the volume.
fn camera_is_inside_fog_volume(local_from_view: &Mat4) -> bool {
Vec3A::from(local_from_view.col(3).xyz())
.abs()
.cmple(Vec3A::splat(0.5))
.all()
}
/// Given the local transforms, returns the matrix that transforms model space
/// to clip space.
fn calculate_fog_volume_clip_from_local_transforms(
interior: bool,
clip_from_view: &Mat4,
view_from_local: &Mat4,
) -> Mat4 {
if !interior {
return *clip_from_view * *view_from_local;
}
// If the camera is inside the fog volume, then we'll be rendering a full
// screen quad. The shader will start its raymarch at the fragment depth
// value, however, so we need to make sure that the depth of the full screen
// quad is at the near clip plane `z_near`.
let z_near = clip_from_view.w_axis[2];
Mat4::from_cols(
vec4(z_near, 0.0, 0.0, 0.0),
vec4(0.0, z_near, 0.0, 0.0),
vec4(0.0, 0.0, 0.0, 0.0),
vec4(0.0, 0.0, z_near, z_near),
)
}


@@ -2,56 +2,78 @@
// sampling directional light shadow maps.
//
// The overall approach is a combination of the volumetric rendering in [1] and
// the shadow map raymarching in [2]. First, we sample the depth buffer to
// determine how long our ray is. Then we do a raymarch, with physically-based
// calculations at each step to determine how much light was absorbed, scattered
// out, and scattered in. To determine in-scattering, we sample the shadow map
// for the light to determine whether the point was in shadow or not.
// the shadow map raymarching in [2]. First, we raytrace the AABB of the fog
// volume in order to determine how long our ray is. Then we do a raymarch, with
// physically-based calculations at each step to determine how much light was
// absorbed, scattered out, and scattered in. To determine in-scattering, we
// sample the shadow map for the light to determine whether the point was in
// shadow or not.
//
// [1]: https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
//
// [2]: http://www.alexandre-pestana.com/volumetric-lights/
#import bevy_core_pipeline::fullscreen_vertex_shader::FullscreenVertexOutput
#import bevy_pbr::mesh_view_bindings::{lights, view}
#import bevy_pbr::mesh_functions::{get_world_from_local, mesh_position_local_to_clip}
#import bevy_pbr::mesh_view_bindings::{globals, lights, view}
#import bevy_pbr::mesh_view_types::DIRECTIONAL_LIGHT_FLAGS_VOLUMETRIC_BIT
#import bevy_pbr::shadow_sampling::sample_shadow_map_hardware
#import bevy_pbr::shadows::{get_cascade_index, world_to_directional_light_local}
#import bevy_pbr::utils::interleaved_gradient_noise
#import bevy_pbr::view_transformations::{
depth_ndc_to_view_z,
frag_coord_to_ndc,
position_ndc_to_view,
position_ndc_to_world
position_ndc_to_world,
position_view_to_world
}
// The GPU version of [`VolumetricFogSettings`]. See the comments in
// `volumetric_fog/mod.rs` for descriptions of the fields here.
struct VolumetricFog {
clip_from_local: mat4x4<f32>,
uvw_from_world: mat4x4<f32>,
far_planes: array<vec4<f32>, 3>,
fog_color: vec3<f32>,
light_tint: vec3<f32>,
ambient_color: vec3<f32>,
ambient_intensity: f32,
step_count: u32,
max_depth: f32,
bounding_radius: f32,
absorption: f32,
scattering: f32,
density: f32,
density_factor: f32,
scattering_asymmetry: f32,
light_intensity: f32,
jitter_strength: f32,
}
@group(1) @binding(0) var<uniform> volumetric_fog: VolumetricFog;
@group(1) @binding(1) var color_texture: texture_2d<f32>;
@group(1) @binding(2) var color_sampler: sampler;
#ifdef MULTISAMPLED
@group(1) @binding(3) var depth_texture: texture_depth_multisampled_2d;
@group(1) @binding(1) var depth_texture: texture_depth_multisampled_2d;
#else
@group(1) @binding(3) var depth_texture: texture_depth_2d;
@group(1) @binding(1) var depth_texture: texture_depth_2d;
#endif
#ifdef DENSITY_TEXTURE
@group(1) @binding(2) var density_texture: texture_3d<f32>;
@group(1) @binding(3) var density_sampler: sampler;
#endif // DENSITY_TEXTURE
// 1 / (4π)
const FRAC_4_PI: f32 = 0.07957747154594767;
struct Vertex {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
}
@vertex
fn vertex(vertex: Vertex) -> @builtin(position) vec4<f32> {
return volumetric_fog.clip_from_local * vec4<f32>(vertex.position, 1.0);
}
// The common Henyey-Greenstein asymmetric phase function [1] [2].
//
// This determines how much light goes toward the viewer as opposed to away from
@@ -68,80 +90,113 @@ fn henyey_greenstein(neg_LdotV: f32) -> f32 {
}
@fragment
fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
fn fragment(@builtin(position) position: vec4<f32>) -> @location(0) vec4<f32> {
// Unpack the `volumetric_fog` settings.
let uvw_from_world = volumetric_fog.uvw_from_world;
let fog_color = volumetric_fog.fog_color;
let ambient_color = volumetric_fog.ambient_color;
let ambient_intensity = volumetric_fog.ambient_intensity;
let step_count = volumetric_fog.step_count;
let max_depth = volumetric_fog.max_depth;
let bounding_radius = volumetric_fog.bounding_radius;
let absorption = volumetric_fog.absorption;
let scattering = volumetric_fog.scattering;
let density = volumetric_fog.density;
let density_factor = volumetric_fog.density_factor;
let light_tint = volumetric_fog.light_tint;
let light_intensity = volumetric_fog.light_intensity;
let jitter_strength = volumetric_fog.jitter_strength;
// Unpack the view.
let exposure = view.exposure;
// Sample the depth. If this is multisample, just use sample 0; this is
// approximate but good enough.
let frag_coord = in.position;
let depth = textureLoad(depth_texture, vec2<i32>(frag_coord.xy), 0);
// Sample the depth to put an upper bound on the length of the ray (as we
// shouldn't trace through solid objects). If this is multisample, just use
// sample 0; this is approximate but good enough.
let frag_coord = position;
let ndc_end_depth_from_buffer = textureLoad(depth_texture, vec2<i32>(frag_coord.xy), 0);
let view_end_depth_from_buffer = -position_ndc_to_view(
frag_coord_to_ndc(vec4(position.xy, ndc_end_depth_from_buffer, 1.0))).z;
// Calculate the start position of the ray. Since we're only rendering front
// faces of the AABB, this is the current fragment's depth.
let view_start_pos = position_ndc_to_view(frag_coord_to_ndc(frag_coord));
// Calculate the end position of the ray. This requires us to raytrace the
// three back faces of the AABB to find the one that our ray intersects.
var end_depth_view = 0.0;
for (var plane_index = 0; plane_index < 3; plane_index += 1) {
let plane = volumetric_fog.far_planes[plane_index];
let other_plane_a = volumetric_fog.far_planes[(plane_index + 1) % 3];
let other_plane_b = volumetric_fog.far_planes[(plane_index + 2) % 3];
// Calculate the intersection of the ray and the plane. The ray must
// intersect in front of us (t > 0).
let t = -plane.w / dot(plane.xyz, view_start_pos.xyz);
if (t < 0.0) {
continue;
}
let hit_pos = view_start_pos.xyz * t;
// The intersection point must be in front of the other backfaces.
let other_sides = vec2(
dot(vec4(hit_pos, 1.0), other_plane_a) >= 0.0,
dot(vec4(hit_pos, 1.0), other_plane_b) >= 0.0
);
// If those tests pass, we found our backface.
if (all(other_sides)) {
end_depth_view = -hit_pos.z;
break;
}
}
// Starting at the end depth, which we got above, figure out how long the
// ray we want to trace is and the length of each increment.
end_depth_view = min(end_depth_view, view_end_depth_from_buffer);
// We assume world and view have the same scale here.
let start_depth_view = -depth_ndc_to_view_z(frag_coord.z);
let ray_length_view = abs(end_depth_view - start_depth_view);
let inv_step_count = 1.0 / f32(step_count);
let step_size_world = ray_length_view * inv_step_count;
let directional_light_count = lights.n_directional_lights;
// Calculate the ray origin (`Ro`) and the ray direction (`Rd`) in NDC,
// view, and world coordinates.
let Rd_ndc = vec3(frag_coord_to_ndc(position).xy, 1.0);
let Rd_view = normalize(position_ndc_to_view(Rd_ndc));
var Ro_world = position_view_to_world(view_start_pos.xyz);
let Rd_world = normalize(position_ndc_to_world(Rd_ndc) - view.world_position);
// Offset by jitter.
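// The ray origin is pushed along the view ray by up to `jitter_strength`
// world units, using interleaved gradient noise keyed on the pixel position
// and the frame count so the offset varies per pixel and per frame.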
let jitter = interleaved_gradient_noise(position.xy, globals.frame_count) * jitter_strength;
Ro_world += Rd_world * jitter;
// Use Beer's law [1] [2] to calculate the maximum amount of light that each
// directional light could contribute, and modulate that value by the light
// tint and fog color. (The actual value will in turn be modulated by the
// phase according to the Henyey-Greenstein formula.)
//
// We use a bit of a hack here. Conceptually, directional lights are
// infinitely far away. But, if we modeled exactly that, then directional
// lights would never contribute any light to the fog, because an
// infinitely-far directional light combined with an infinite amount of fog
// would result in complete absorption of the light. So instead we pretend
// that the directional light is `bounding_radius` units away and do the
// calculation in those terms. Because the fake distance to the directional
// light is a constant, this lets us avoid marching secondary rays toward
// the light during the raymarching step, which improves performance
// dramatically.
//
// [1]: https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html
//
// [2]: https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law
// Use Beer's law again to accumulate the ambient light all along the path.
var accumulated_color = exp(-ray_length_view * (absorption + scattering)) * ambient_color *
ambient_intensity;
// This is the amount of the background that shows through. We're actually
// going to recompute this over and over again for each directional light,
// coming up with the same values each time.
var background_alpha = 1.0;
// If we have a density texture, transform to its local space.
#ifdef DENSITY_TEXTURE
let Ro_uvw = (uvw_from_world * vec4(Ro_world, 1.0)).xyz;
let Rd_step_uvw = mat3x3(uvw_from_world[0].xyz, uvw_from_world[1].xyz, uvw_from_world[2].xyz) *
(Rd_world * step_size_world);
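// `Ro_uvw` is the ray origin in the volume's normalized [0, 1]^3 texture
// space, and `Rd_step_uvw` is the change in texture-space position per
// raymarch step, so step N samples the density texture at
// `Ro_uvw + Rd_step_uvw * N` (see the loop below).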
#endif // DENSITY_TEXTURE
for (var light_index = 0u; light_index < directional_light_count; light_index += 1u) {
// Volumetric lights are all sorted first, so the first time we come to
// a non-volumetric light, we know we've seen them all.
@@ -158,10 +213,6 @@ fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
let neg_LdotV = dot(normalize((*light).direction_to_light.xyz), Rd_world);
let phase = henyey_greenstein(neg_LdotV);
// Reset `background_alpha` for a new raymarch.
background_alpha = 1.0;
@@ -173,8 +224,27 @@ fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
}
// Calculate where we are in the ray.
let P_world = Ro_world + Rd_world * f32(step) * step_size_world;
let P_view = Rd_view * f32(step) * step_size_world;
var density = density_factor;
#ifdef DENSITY_TEXTURE
// Take the density texture into account, if there is one.
//
// The uvs should never go outside the (0, 0, 0) to (1, 1, 1) box,
// but sometimes due to floating point error they can. Handle this
// case.
let P_uvw = Ro_uvw + Rd_step_uvw * f32(step);
if (all(P_uvw >= vec3(0.0)) && all(P_uvw <= vec3(1.0))) {
density *= textureSample(density_texture, density_sampler, P_uvw).r;
} else {
density = 0.0;
}
#endif // DENSITY_TEXTURE
// Calculate absorption (amount of light absorbed by the fog) and
// out-scattering (amount of light the fog scattered away).
let sample_attenuation = exp(-step_size_world * density * (absorption + scattering));
// Process absorption and out-scattering.
background_alpha *= sample_attenuation;
@@ -205,6 +275,14 @@ fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
}
if (local_light_attenuation != 0.0) {
let light_attenuation = exp(-density * bounding_radius * (absorption + scattering));
let light_factors_per_step = fog_color * light_tint * light_attenuation *
scattering * density * step_size_world * light_intensity * exposure;
// Modulate the factor we calculated above by the phase, fog color,
// light color, light tint.
let light_color_per_step = (*light).color.rgb * phase * light_factors_per_step;
// Accumulate the light.
accumulated_color += light_color_per_step * local_light_attenuation *
background_alpha;
@@ -212,7 +290,7 @@ fn fragment(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
}
}
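// The RGB channels below contain only the fog's own in-scattered light, and
// the alpha channel contains the fog's opacity (one minus the transmittance
// along the ray), so compositing this output over the scene as
// `fog.rgb + scene.rgb * (1.0 - fog.a)` is equivalent to the blend that the
// old full-screen pass performed directly in the shader.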
// We're done! Return the color with alpha so it can be blended onto the
// render target.
return vec4(accumulated_color, 1.0 - background_alpha);
}


@@ -0,0 +1,89 @@
//! Demonstrates fog volumes with voxel density textures.
//!
//! We render the Stanford bunny as a fog volume. Parts of the bunny become
lighter and darker as the camera rotates. This is physically accurate
//! behavior that results from the scattering and absorption of the directional
//! light.
use bevy::{
math::vec3,
pbr::{FogVolume, VolumetricFogSettings, VolumetricLight},
prelude::*,
};
/// Entry point.
fn main() {
App::new()
.add_plugins(DefaultPlugins.set(WindowPlugin {
primary_window: Some(Window {
title: "Bevy Fog Volumes Example".into(),
..default()
}),
..default()
}))
.insert_resource(AmbientLight::NONE)
.add_systems(Startup, setup)
.add_systems(Update, rotate_camera)
.run();
}
/// Spawns all the objects in the scene.
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
// Spawn a fog volume with a voxelized version of the Stanford bunny.
commands
.spawn(SpatialBundle {
visibility: Visibility::Visible,
transform: Transform::from_xyz(0.0, 0.5, 0.0),
..default()
})
.insert(FogVolume {
density_texture: Some(asset_server.load("volumes/bunny.ktx2")),
density_factor: 1.0,
// Scatter as much of the light as possible, to brighten the bunny
// up.
scattering: 1.0,
..default()
});
// Spawn a bright directional light that illuminates the fog well.
commands
.spawn(DirectionalLightBundle {
transform: Transform::from_xyz(1.0, 1.0, -0.3).looking_at(vec3(0.0, 0.5, 0.0), Vec3::Y),
directional_light: DirectionalLight {
shadows_enabled: true,
illuminance: 32000.0,
..default()
},
..default()
})
// Make sure to add this for the light to interact with the fog.
.insert(VolumetricLight);
// Spawn a camera.
commands
.spawn(Camera3dBundle {
transform: Transform::from_xyz(-0.75, 1.0, 2.0)
.looking_at(vec3(0.0, 0.0, 0.0), Vec3::Y),
camera: Camera {
hdr: true,
..default()
},
..default()
})
.insert(VolumetricFogSettings {
// Make this relatively high in order to increase the fog quality.
step_count: 64,
// Disable ambient light.
ambient_intensity: 0.0,
..default()
});
}
/// Rotates the camera a bit every frame.
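/// The camera's translation is swung 0.01 radians around the world Y axis
/// each frame, and the camera is then re-aimed at the center of the bunny at
/// (0.0, 0.5, 0.0).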
fn rotate_camera(mut cameras: Query<&mut Transform, With<Camera3d>>) {
for mut camera_transform in cameras.iter_mut() {
*camera_transform =
Transform::from_translation(Quat::from_rotation_y(0.01) * camera_transform.translation)
.looking_at(vec3(0.0, 0.5, 0.0), Vec3::Y);
}
}
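For comparison with the textured volume above, here is a minimal sketch of spawning a plain, untextured fog volume with `FogVolumeBundle`. It only uses `FogVolume` fields that appear in this PR (`density_factor`, `scattering`); the `fog_volume` field name on the bundle and the numeric values are assumptions for illustration, not part of this change.

```rust
use bevy::{
    pbr::{FogVolume, FogVolumeBundle},
    prelude::*,
};

/// Hypothetical helper, not included in this PR: spawns a fixed-density fog
/// volume with no density texture.
fn spawn_plain_fog_volume(commands: &mut Commands) {
    commands.spawn(FogVolumeBundle {
        // Position and size the volume with an ordinary `Transform`.
        transform: Transform::from_xyz(0.0, 0.5, 0.0).with_scale(Vec3::splat(2.0)),
        // Assumed field name for the `FogVolume` component in the bundle.
        fog_volume: FogVolume {
            // With no `density_texture`, the density is uniform inside the box.
            density_factor: 0.5,
            scattering: 0.3,
            ..default()
        },
        ..default()
    });
}
```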


@@ -3,7 +3,7 @@
use bevy::{
core_pipeline::{bloom::BloomSettings, tonemapping::Tonemapping, Skybox},
math::vec3,
pbr::{FogVolumeBundle, VolumetricFogSettings, VolumetricLight},
prelude::*,
};
@@ -36,7 +36,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
..default()
});
// Spawn the camera.
commands
.spawn(Camera3dBundle {
transform: Transform::from_xyz(-1.7, 1.5, 4.5)
@@ -60,6 +60,12 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
..default()
});
// Add the fog volume.
commands.spawn(FogVolumeBundle {
transform: Transform::from_scale(Vec3::splat(35.0)),
..default()
});
// Add the help text.
commands.spawn(
TextBundle {


@@ -141,6 +141,7 @@ Example | Description
[Deferred Rendering](../examples/3d/deferred_rendering.rs) | Renders meshes with both forward and deferred pipelines
[Depth of field](../examples/3d/depth_of_field.rs) | Demonstrates depth of field
[Fog](../examples/3d/fog.rs) | A scene showcasing the distance fog effect
[Fog volumes](../examples/3d/fog_volumes.rs) | Demonstrates fog volumes
[Generate Custom Mesh](../examples/3d/generate_custom_mesh.rs) | Simple showcase of how to generate a custom mesh with a custom texture
[Irradiance Volumes](../examples/3d/irradiance_volumes.rs) | Demonstrates irradiance volumes
[Lighting](../examples/3d/lighting.rs) | Illustrates various lighting options in a simple scene