Implement fast depth of field as a postprocessing effect. (#13009)

This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.

The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur and a diagonal
blur in the first pass, followed by two more diagonal blurs in the
second. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.
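As an aside, the separability claim is easy to check numerically. The following standalone Rust sketch (illustrative only, not Bevy's GPU code) applies a small separable kernel as two 1D passes and as a single 2D convolution, and confirms the results match:

```rust
// Illustrative sketch: a separable kernel applied as two 1D passes matches
// the full 2D convolution while doing 2k taps per pixel instead of k².
// The 3-tap kernel below is the outer-product factor of a 3x3 kernel.
const K: [f32; 3] = [0.25, 0.5, 0.25];

// Clamp-to-edge texel fetch, mirroring typical sampler addressing.
fn at(img: &[f32], w: usize, h: usize, x: isize, y: isize) -> f32 {
    let x = x.clamp(0, w as isize - 1) as usize;
    let y = y.clamp(0, h as isize - 1) as usize;
    img[y * w + x]
}

// Horizontal pass, then vertical pass.
fn blur_separable(img: &[f32], w: usize, h: usize) -> Vec<f32> {
    let mut tmp = vec![0.0f32; w * h];
    for y in 0..h {
        for x in 0..w {
            tmp[y * w + x] = (0..3)
                .map(|i| K[i] * at(img, w, h, x as isize + i as isize - 1, y as isize))
                .sum();
        }
    }
    let mut out = vec![0.0f32; w * h];
    for y in 0..h {
        for x in 0..w {
            out[y * w + x] = (0..3)
                .map(|i| K[i] * at(&tmp, w, h, x as isize, y as isize + i as isize - 1))
                .sum();
        }
    }
    out
}

// Direct 2D convolution with the outer-product kernel K[i] * K[j].
fn blur_2d(img: &[f32], w: usize, h: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; w * h];
    for y in 0..h {
        for x in 0..w {
            let mut acc = 0.0;
            for j in 0..3 {
                for i in 0..3 {
                    acc += K[i] * K[j]
                        * at(img, w, h, x as isize + i as isize - 1, y as isize + j as isize - 1);
                }
            }
            out[y * w + x] = acc;
        }
    }
    out
}

fn main() {
    let img: Vec<f32> = (0..16).map(|i| i as f32).collect();
    let a = blur_separable(&img, 4, 4);
    let b = blur_2d(&img, 4, 4);
    assert!(a.iter().zip(&b).all(|(p, q)| (p - q).abs() < 1e-5));
}
```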

The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally controlled by the [f-number], while the focal length
is computed from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.
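As a rough sketch of that derivation (the helper names below are illustrative, not Bevy's API): the focal length follows from the sensor height and the vertical FOV via the standard pinhole relation, and the circle-of-confusion scale factor used by the shader is focal_length² / (sensor_height · f-number):

```rust
// Illustrative derivation of the physical camera parameters; these helper
// names are hypothetical, not part of this commit's API.

/// f = sensor_height / (2 · tan(fov_y / 2)), all lengths in meters.
fn focal_length(sensor_height: f32, vertical_fov: f32) -> f32 {
    sensor_height / (2.0 * (vertical_fov * 0.5).tan())
}

/// The premultiplied CoC factor: focal_length² / (sensor_height · f_number).
fn coc_scale_factor(focal_length: f32, sensor_height: f32, f_number: f32) -> f32 {
    focal_length * focal_length / (sensor_height * f_number)
}

fn main() {
    // Super 35 (≈18.66 mm sensor height), a 45° vertical FOV, f/1.
    let f = focal_length(0.01866, 45.0_f32.to_radians());
    println!("focal length ≈ {:.1} mm", f * 1000.0); // ≈ 22.5 mm
    println!("CoC scale = {:.4}", coc_scale_factor(f, 0.01866, 1.0));
}
```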

A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance, and
f-number. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.

Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)

Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)

No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)

[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field

[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[f-number]: https://en.wikipedia.org/wiki/F-number

[Super 35]: https://en.wikipedia.org/wiki/Super_35

[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

## Changelog

### Added

* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
Patrick Walton 2024-05-13 13:23:56 -05:00 committed by GitHub
parent 3f5a090b1b
commit df31b808c3
12 changed files with 1518 additions and 2 deletions


@@ -3026,6 +3026,17 @@ description = "Demonstrates the clearcoat PBR feature"
category = "3D Rendering"
wasm = false
[[example]]
name = "depth_of_field"
path = "examples/3d/depth_of_field.rs"
doc-scrape-examples = true
[package.metadata.example.depth_of_field]
name = "Depth of field"
description = "Demonstrates depth of field"
category = "3D Rendering"
wasm = false
[profile.wasm-release]
inherits = "release"
opt-level = "z"


@@ -37,6 +37,7 @@ serde = { version = "1", features = ["derive"] }
bitflags = "2.3"
radsort = "0.1"
nonmax = "0.5"
smallvec = "1"
thiserror = "1.0"
[lints]


@@ -29,6 +29,7 @@ pub mod graph {
MotionBlur,
Bloom,
AutoExposure,
DepthOfField,
Tonemapping,
Fxaa,
Upscaling,
@@ -80,6 +81,7 @@ use crate::{
AlphaMask3dDeferred, Opaque3dDeferred, DEFERRED_LIGHTING_PASS_ID_FORMAT,
DEFERRED_PREPASS_FORMAT,
},
dof::DepthOfFieldNode,
prepass::{
node::PrepassNode, AlphaMask3dPrepass, DeferredPrepass, DepthPrepass, MotionVectorPrepass,
NormalPrepass, Opaque3dPrepass, OpaqueNoLightmap3dBinKey, ViewPrepassTextures,
@@ -152,6 +154,7 @@ impl Plugin for Core3dPlugin {
Node3d::MainTransparentPass,
)
.add_render_graph_node::<EmptyNode>(Core3d, Node3d::EndMainPass)
.add_render_graph_node::<ViewNodeRunner<DepthOfFieldNode>>(Core3d, Node3d::DepthOfField)
.add_render_graph_node::<ViewNodeRunner<TonemappingNode>>(Core3d, Node3d::Tonemapping)
.add_render_graph_node::<EmptyNode>(Core3d, Node3d::EndMainPassPostProcessing)
.add_render_graph_node::<ViewNodeRunner<UpscalingNode>>(Core3d, Node3d::Upscaling)


@@ -0,0 +1,301 @@
// Performs depth of field postprocessing, with both Gaussian and bokeh kernels.
//
// Gaussian blur is performed as a separable convolution: first blurring in the
// X direction, and then in the Y direction. This is asymptotically more
// efficient than performing a 2D convolution.
//
// The Bokeh blur uses a similar, but more complex, separable convolution
// technique. The algorithm is described in Colin Barré-Brisebois, "Hexagonal
// Bokeh Blur Revisited" [1]. It's motivated by the observation that we can use
// separable convolutions not only to produce boxes but to produce
// parallelograms. Thus, by performing three separable convolutions in sequence,
// we can produce a hexagonal shape. The first and second convolutions are done
// simultaneously using multiple render targets to cut the total number of
// passes down to two.
//
// [1]: https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited-part-2-improved-2-pass-version/
#import bevy_core_pipeline::fullscreen_vertex_shader::FullscreenVertexOutput
#import bevy_pbr::mesh_view_bindings::view
#import bevy_pbr::view_transformations::depth_ndc_to_view_z
#import bevy_render::view::View
// Parameters that control the depth of field effect. See
// `bevy_core_pipeline::dof::DepthOfFieldUniforms` for information on what these
// parameters mean.
struct DepthOfFieldParams {
/// The distance in meters to the location in focus.
focal_distance: f32,
/// The [focal length]. Physically speaking, this represents "the distance
/// from the center of the lens to the principal foci of the lens". The
/// default value, 50 mm, is considered representative of human eyesight.
/// Real-world lenses range anywhere from 5 mm for "fisheye" lenses to
/// 2000 mm for "super-telephoto" lenses designed for very distant objects.
///
/// The higher the value, the more blurry objects not in focus will be.
///
/// [focal length]: https://en.wikipedia.org/wiki/Focal_length
focal_length: f32,
/// The premultiplied factor that we scale the circle of confusion by.
///
/// This is calculated as `focal_length² / (sensor_height * aperture_f_stops)`.
coc_scale_factor: f32,
/// The maximum diameter, in pixels, that we allow a circle of confusion to be.
///
/// A circle of confusion essentially describes the size of a blur.
///
/// This value is nonphysical but is useful for avoiding pathologically-slow
/// behavior.
max_circle_of_confusion_diameter: f32,
/// The depth value that we clamp distant objects to. See the comment in
/// [`DepthOfFieldSettings`] for more information.
max_depth: f32,
/// Padding.
pad_a: u32,
/// Padding.
pad_b: u32,
/// Padding.
pad_c: u32,
}
// The first bokeh pass outputs to two render targets. We declare them here.
struct DualOutput {
// The vertical output.
@location(0) output_0: vec4<f32>,
// The diagonal output.
@location(1) output_1: vec4<f32>,
}
// @group(0) @binding(0) is `mesh_view_bindings::view`.
// The depth texture for the main view.
#ifdef MULTISAMPLED
@group(0) @binding(1) var depth_texture: texture_depth_multisampled_2d;
#else // MULTISAMPLED
@group(0) @binding(1) var depth_texture: texture_depth_2d;
#endif // MULTISAMPLED
// The main color texture.
@group(0) @binding(2) var color_texture_a: texture_2d<f32>;
// The auxiliary color texture that we're sampling from. This is only used as
// part of the second bokeh pass.
#ifdef DUAL_INPUT
@group(0) @binding(3) var color_texture_b: texture_2d<f32>;
#endif // DUAL_INPUT
// The global uniforms, representing data backed by buffers shared among all
// views in the scene.
// The parameters that control the depth of field effect.
@group(1) @binding(0) var<uniform> dof_params: DepthOfFieldParams;
// The sampler that's used to fetch texels from the source color buffer.
@group(1) @binding(1) var color_texture_sampler: sampler;
// cos(-30°), used for the bokeh blur.
const COS_NEG_FRAC_PI_6: f32 = 0.8660254037844387;
// sin(-30°), used for the bokeh blur.
const SIN_NEG_FRAC_PI_6: f32 = -0.5;
// cos(-150°), used for the bokeh blur.
const COS_NEG_FRAC_PI_5_6: f32 = -0.8660254037844387;
// sin(-150°), used for the bokeh blur.
const SIN_NEG_FRAC_PI_5_6: f32 = -0.5;
// Calculates and returns the diameter (not the radius) of the [circle of
// confusion].
//
// [circle of confusion]: https://en.wikipedia.org/wiki/Circle_of_confusion
fn calculate_circle_of_confusion(in_frag_coord: vec4<f32>) -> f32 {
// Unpack the depth of field parameters.
let focus = dof_params.focal_distance;
let f = dof_params.focal_length;
let scale = dof_params.coc_scale_factor;
let max_coc_diameter = dof_params.max_circle_of_confusion_diameter;
// Sample the depth.
let frag_coord = vec2<i32>(floor(in_frag_coord.xy));
let raw_depth = textureLoad(depth_texture, frag_coord, 0);
let depth = min(-depth_ndc_to_view_z(raw_depth), dof_params.max_depth);
// Calculate the circle of confusion.
//
// This is just the formula from Wikipedia [1].
//
// [1]: https://en.wikipedia.org/wiki/Circle_of_confusion#Determining_a_circle_of_confusion_diameter_from_the_object_field
let candidate_coc = scale * abs(depth - focus) / (depth * (focus - f));
let framebuffer_size = vec2<f32>(textureDimensions(color_texture_a));
return clamp(candidate_coc * framebuffer_size.y, 0.0, max_coc_diameter);
}
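For CPU-side sanity checking, the computation above can be transliterated into plain Rust (illustrative only; this mirrors the WGSL and is not part of the shader, and the parameter values below are assumptions):

```rust
// A plain-Rust transliteration of `calculate_circle_of_confusion` above.
// Lengths are in meters; the result is a diameter in pixels.
fn coc_diameter_px(
    depth: f32,   // view-space distance to the fragment, pre-clamped to max_depth
    focus: f32,   // focal distance
    f: f32,       // focal length
    scale: f32,   // focal_length² / (sensor_height · f_number)
    framebuffer_height: f32,
    max_coc: f32, // maximum allowed CoC diameter, in pixels
) -> f32 {
    let candidate = scale * (depth - focus).abs() / (depth * (focus - f));
    (candidate * framebuffer_height).clamp(0.0, max_coc)
}

fn main() {
    // An object exactly at the focal distance has a zero-size circle of
    // confusion; an object away from it gets a positive diameter, capped
    // at `max_coc`.
    let (focus, f, scale) = (10.0, 0.0225, 0.0272);
    assert_eq!(coc_diameter_px(10.0, focus, f, scale, 1080.0, 64.0), 0.0);
    let far = coc_diameter_px(1000.0, focus, f, scale, 1080.0, 64.0);
    println!("far blur = {far} px");
}
```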
// Performs a single direction of the separable Gaussian blur kernel.
//
// * `frag_coord` is the screen-space pixel coordinate of the fragment (i.e. the
// `position` input to the fragment).
//
// * `coc` is the diameter (not the radius) of the circle of confusion for this
// fragment.
//
// * `frag_offset` is the vector, in screen-space units, from one sample to the
// next. For a horizontal blur this will be `vec2(1.0, 0.0)`; for a vertical
// blur this will be `vec2(0.0, 1.0)`.
//
// Returns the resulting color of the fragment.
fn gaussian_blur(frag_coord: vec4<f32>, coc: f32, frag_offset: vec2<f32>) -> vec4<f32> {
// Usually σ (the standard deviation) is half the radius, and the radius is
// half the CoC. So we multiply by 0.25.
let sigma = coc * 0.25;
// 1.5σ is a good, somewhat aggressive default for `support`, the number of
// texels on each side of the center that we process.
let support = i32(ceil(sigma * 1.5));
let uv = frag_coord.xy / vec2<f32>(textureDimensions(color_texture_a));
let offset = frag_offset / vec2<f32>(textureDimensions(color_texture_a));
// The probability density function of the Gaussian blur is (up to constant
// factors) `exp(-x² / (2σ²))`. We precalculate the constant factor here to
// avoid having to calculate it in the inner loop.
let exp_factor = -1.0 / (2.0 * sigma * sigma);
// Accumulate samples on both sides of the current texel. Go two at a time,
// taking advantage of bilinear filtering.
var sum = textureSampleLevel(color_texture_a, color_texture_sampler, uv, 0.0).rgb;
var weight_sum = 1.0;
for (var i = 1; i <= support; i += 2) {
// This is a well-known trick to reduce the number of needed texture
// samples by a factor of two. We seek to accumulate two adjacent
// samples c0 and c1 with weights w0 and w1 respectively, with a single
// texture sample at a carefully chosen location. Observe that:
//
//     k · lerp(c0, c1, t) = w0·c0 + w1·c1
//
//     if k = w0 + w1 and t = w1 / (w0 + w1)
//
// Therefore, if we sample at a distance of t = w1 / (w0 + w1) texels in
// between the two texel centers and scale by k = w0 + w1 afterward, we
// effectively evaluate w0·c0 + w1·c1 with a single texture lookup.
let w0 = exp(exp_factor * f32(i) * f32(i));
let w1 = exp(exp_factor * f32(i + 1) * f32(i + 1));
let uv_offset = offset * (f32(i) + w1 / (w0 + w1));
let weight = w0 + w1;
sum += (
textureSampleLevel(color_texture_a, color_texture_sampler, uv + uv_offset, 0.0).rgb +
textureSampleLevel(color_texture_a, color_texture_sampler, uv - uv_offset, 0.0).rgb
) * weight;
weight_sum += weight * 2.0;
}
return vec4(sum / weight_sum, 1.0);
}
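The bilinear two-tap trick used in the loop above rests on a small algebraic identity, which a standalone Rust snippet (illustrative only, not part of the shader) can verify numerically:

```rust
// Checks the identity k · lerp(c0, c1, t) = w0·c0 + w1·c1 when
// k = w0 + w1 and t = w1 / (w0 + w1), which is what lets the Gaussian
// blur fetch two weighted texels with one bilinear sample.
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

fn main() {
    let (c0, c1) = (0.25_f32, 0.9_f32); // two adjacent texel values
    let (w0, w1) = (0.6_f32, 0.3_f32);  // their Gaussian weights
    let (k, t) = (w0 + w1, w1 / (w0 + w1));
    let direct = w0 * c0 + w1 * c1;
    let single_sample = k * lerp(c0, c1, t);
    assert!((direct - single_sample).abs() < 1e-6);
}
```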
// Performs a box blur in a single direction, sampling `color_texture_a`.
//
// * `frag_coord` is the screen-space pixel coordinate of the fragment (i.e. the
// `position` input to the fragment).
//
// * `coc` is the diameter (not the radius) of the circle of confusion for this
// fragment.
//
// * `frag_offset` is the vector, in screen-space units, from one sample to the
// next. This need not be horizontal or vertical.
fn box_blur_a(frag_coord: vec4<f32>, coc: f32, frag_offset: vec2<f32>) -> vec4<f32> {
let support = i32(round(coc * 0.5));
let uv = frag_coord.xy / vec2<f32>(textureDimensions(color_texture_a));
let offset = frag_offset / vec2<f32>(textureDimensions(color_texture_a));
// Accumulate samples in a single direction.
var sum = vec3(0.0);
for (var i = 0; i <= support; i += 1) {
sum += textureSampleLevel(
color_texture_a, color_texture_sampler, uv + offset * f32(i), 0.0).rgb;
}
return vec4(sum / vec3(1.0 + f32(support)), 1.0);
}
// Performs a box blur in a single direction, sampling `color_texture_b`.
//
// * `frag_coord` is the screen-space pixel coordinate of the fragment (i.e. the
// `position` input to the fragment).
//
// * `coc` is the diameter (not the radius) of the circle of confusion for this
// fragment.
//
// * `frag_offset` is the vector, in screen-space units, from one sample to the
// next. This need not be horizontal or vertical.
#ifdef DUAL_INPUT
fn box_blur_b(frag_coord: vec4<f32>, coc: f32, frag_offset: vec2<f32>) -> vec4<f32> {
let support = i32(round(coc * 0.5));
let uv = frag_coord.xy / vec2<f32>(textureDimensions(color_texture_b));
let offset = frag_offset / vec2<f32>(textureDimensions(color_texture_b));
// Accumulate samples in a single direction.
var sum = vec3(0.0);
for (var i = 0; i <= support; i += 1) {
sum += textureSampleLevel(
color_texture_b, color_texture_sampler, uv + offset * f32(i), 0.0).rgb;
}
return vec4(sum / vec3(1.0 + f32(support)), 1.0);
}
#endif
// Calculates the horizontal component of the separable Gaussian blur.
@fragment
fn gaussian_horizontal(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
let coc = calculate_circle_of_confusion(in.position);
return gaussian_blur(in.position, coc, vec2(1.0, 0.0));
}
// Calculates the vertical component of the separable Gaussian blur.
@fragment
fn gaussian_vertical(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
let coc = calculate_circle_of_confusion(in.position);
return gaussian_blur(in.position, coc, vec2(0.0, 1.0));
}
// Calculates the vertical and first diagonal components of the separable
// hexagonal bokeh blur.
@fragment
fn bokeh_pass_0(in: FullscreenVertexOutput) -> DualOutput {
let coc = calculate_circle_of_confusion(in.position);
let vertical = box_blur_a(in.position, coc, vec2(0.0, 1.0));
let diagonal = box_blur_a(in.position, coc, vec2(COS_NEG_FRAC_PI_6, SIN_NEG_FRAC_PI_6));
// Note that the diagonal part is pre-mixed with the vertical component.
var output: DualOutput;
output.output_0 = vertical;
output.output_1 = mix(vertical, diagonal, 0.5);
return output;
}
// Calculates the second diagonal components of the separable hexagonal bokeh
// blur.
#ifdef DUAL_INPUT
@fragment
fn bokeh_pass_1(in: FullscreenVertexOutput) -> @location(0) vec4<f32> {
let coc = calculate_circle_of_confusion(in.position);
let output_0 = box_blur_a(in.position, coc, vec2(COS_NEG_FRAC_PI_6, SIN_NEG_FRAC_PI_6));
let output_1 = box_blur_b(in.position, coc, vec2(COS_NEG_FRAC_PI_5_6, SIN_NEG_FRAC_PI_5_6));
return mix(output_0, output_1, 0.5);
}
#endif


@@ -0,0 +1,907 @@
//! Depth of field, a postprocessing effect that simulates camera focus.
//!
//! By default, Bevy renders all objects in full focus: regardless of depth, all
//! objects are rendered perfectly sharp (up to output resolution). Real lenses,
//! however, can only focus on objects at a specific distance. The distance
//! between the nearest and furthest objects that are in focus is known as
//! [depth of field], and this term is used more generally in computer graphics
//! to refer to the effect that simulates focus of lenses.
//!
//! Attaching [`DepthOfFieldSettings`] to a camera causes Bevy to simulate the
//! focus of a camera lens. Generally, Bevy's implementation of depth of field
//! is optimized for speed instead of physical accuracy. Nevertheless, the depth
//! of field effect in Bevy is based on physical parameters.
//!
//! [Depth of field]: https://en.wikipedia.org/wiki/Depth_of_field
use std::f32::INFINITY;
use bevy_app::{App, Plugin};
use bevy_asset::{load_internal_asset, Handle};
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::{
component::Component,
entity::Entity,
query::{QueryItem, With},
schedule::IntoSystemConfigs as _,
system::{lifetimeless::Read, Commands, Query, Res, ResMut, Resource},
world::{FromWorld, World},
};
use bevy_render::{
camera::{PhysicalCameraParameters, Projection},
extract_component::{ComponentUniforms, DynamicUniformIndex, UniformComponentPlugin},
render_graph::{
NodeRunError, RenderGraphApp as _, RenderGraphContext, ViewNode, ViewNodeRunner,
},
render_resource::{
binding_types::{
sampler, texture_2d, texture_depth_2d, texture_depth_2d_multisampled, uniform_buffer,
},
BindGroup, BindGroupEntries, BindGroupLayout, BindGroupLayoutEntries,
CachedRenderPipelineId, ColorTargetState, ColorWrites, FilterMode, FragmentState, LoadOp,
Operations, PipelineCache, RenderPassColorAttachment, RenderPassDescriptor,
RenderPipelineDescriptor, Sampler, SamplerBindingType, SamplerDescriptor, Shader,
ShaderStages, ShaderType, SpecializedRenderPipeline, SpecializedRenderPipelines, StoreOp,
TextureDescriptor, TextureDimension, TextureFormat, TextureSampleType, TextureUsages,
},
renderer::{RenderContext, RenderDevice},
texture::{BevyDefault, CachedTexture, TextureCache},
view::{
prepare_view_targets, ExtractedView, Msaa, ViewDepthTexture, ViewTarget, ViewUniform,
ViewUniformOffset, ViewUniforms,
},
Extract, ExtractSchedule, Render, RenderApp, RenderSet,
};
use bevy_utils::{info_once, prelude::default, warn_once};
use smallvec::SmallVec;
use crate::{
core_3d::{
graph::{Core3d, Node3d},
Camera3d,
},
fullscreen_vertex_shader::fullscreen_shader_vertex_state,
};
const DOF_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(2031861180739216043);
/// A plugin that adds support for the depth of field effect to Bevy.
pub struct DepthOfFieldPlugin;
/// Depth of field settings.
#[derive(Component, Clone, Copy)]
pub struct DepthOfFieldSettings {
/// The appearance of the effect.
pub mode: DepthOfFieldMode,
/// The distance in meters to the location in focus.
pub focal_distance: f32,
/// The height of the [image sensor format] in meters.
///
/// Focal length is derived from the FOV and this value. The default is
/// 18.66mm, matching the [Super 35] format, which is popular in cinema.
///
/// [image sensor format]: https://en.wikipedia.org/wiki/Image_sensor_format
///
/// [Super 35]: https://en.wikipedia.org/wiki/Super_35
pub sensor_height: f32,
/// Along with the focal length, controls how much objects not in focus are
/// blurred.
pub aperture_f_stops: f32,
/// The maximum diameter, in pixels, that we allow a circle of confusion to be.
///
/// A circle of confusion essentially describes the size of a blur.
///
/// This value is nonphysical but is useful for avoiding pathologically-slow
/// behavior.
pub max_circle_of_confusion_diameter: f32,
/// Objects are never considered to be farther away than this distance as
/// far as depth of field is concerned, even if they actually are.
///
/// This is primarily useful for skyboxes and background colors. The Bevy
/// renderer considers them to be infinitely far away. Without this value,
/// that would cause the circle of confusion to be infinitely large, capped
/// only by the `max_circle_of_confusion_diameter`. As that's unsightly,
/// this value can be used to essentially adjust how "far away" the skybox
/// or background are.
pub max_depth: f32,
}
/// Controls the appearance of the effect.
#[derive(Component, Clone, Copy, Default, PartialEq, Debug)]
pub enum DepthOfFieldMode {
/// A more accurate simulation, in which circles of confusion generate
/// "spots" of light.
///
/// For more information, see [Wikipedia's article on *bokeh*].
///
/// This is the default.
///
/// [Wikipedia's article on *bokeh*]: https://en.wikipedia.org/wiki/Bokeh
#[default]
Bokeh,
/// A faster simulation, in which out-of-focus areas are simply blurred.
///
/// This is less accurate to actual lens behavior and is generally less
/// aesthetically pleasing but requires less video memory bandwidth.
Gaussian,
}
/// Data about the depth of field effect that's uploaded to the GPU.
#[derive(Clone, Copy, Component, ShaderType)]
pub struct DepthOfFieldUniform {
/// The distance in meters to the location in focus.
focal_distance: f32,
/// The focal length. See the comment in `DepthOfFieldParams` in `dof.wgsl`
/// for more information.
focal_length: f32,
/// The premultiplied factor that we scale the circle of confusion by.
///
/// This is calculated as `focal_length² / (sensor_height *
/// aperture_f_stops)`.
coc_scale_factor: f32,
/// The maximum circle of confusion diameter in pixels. See the comment in
/// [`DepthOfFieldSettings`] for more information.
max_circle_of_confusion_diameter: f32,
/// The depth value that we clamp distant objects to. See the comment in
/// [`DepthOfFieldSettings`] for more information.
max_depth: f32,
/// Padding.
pad_a: u32,
/// Padding.
pad_b: u32,
/// Padding.
pad_c: u32,
}
/// A key that uniquely identifies depth of field pipelines.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct DepthOfFieldPipelineKey {
/// Whether we're doing Gaussian or bokeh blur.
pass: DofPass,
/// Whether we're using HDR.
hdr: bool,
/// Whether the render target is multisampled.
multisample: bool,
}
/// Identifies a specific depth of field render pass.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum DofPass {
/// The first, horizontal, Gaussian blur pass.
GaussianHorizontal,
/// The second, vertical, Gaussian blur pass.
GaussianVertical,
/// The first bokeh pass: vertical and diagonal.
BokehPass0,
/// The second bokeh pass: two diagonals.
BokehPass1,
}
impl Plugin for DepthOfFieldPlugin {
fn build(&self, app: &mut App) {
load_internal_asset!(app, DOF_SHADER_HANDLE, "dof.wgsl", Shader::from_wgsl);
app.add_plugins(UniformComponentPlugin::<DepthOfFieldUniform>::default());
let Some(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app
.init_resource::<SpecializedRenderPipelines<DepthOfFieldPipeline>>()
.init_resource::<DepthOfFieldGlobalBindGroup>()
.add_systems(ExtractSchedule, extract_depth_of_field_settings)
.add_systems(
Render,
(
configure_depth_of_field_view_targets,
prepare_auxiliary_depth_of_field_textures,
)
.after(prepare_view_targets)
.in_set(RenderSet::ManageViews),
)
.add_systems(
Render,
(
prepare_depth_of_field_view_bind_group_layouts,
prepare_depth_of_field_pipelines,
)
.chain()
.in_set(RenderSet::Prepare),
)
.add_systems(
Render,
prepare_depth_of_field_global_bind_group.in_set(RenderSet::PrepareBindGroups),
)
.add_render_graph_node::<ViewNodeRunner<DepthOfFieldNode>>(Core3d, Node3d::DepthOfField)
.add_render_graph_edges(
Core3d,
(Node3d::Bloom, Node3d::DepthOfField, Node3d::Tonemapping),
);
}
fn finish(&self, app: &mut App) {
let Some(render_app) = app.get_sub_app_mut(RenderApp) else {
return;
};
render_app.init_resource::<DepthOfFieldGlobalBindGroupLayout>();
}
}
/// The node in the render graph for depth of field.
#[derive(Default)]
pub struct DepthOfFieldNode;
/// The layout for the bind group shared among all invocations of the depth of
/// field shader.
#[derive(Resource, Clone)]
pub struct DepthOfFieldGlobalBindGroupLayout {
/// The layout.
layout: BindGroupLayout,
/// The sampler used to sample from the color buffer or buffers.
color_texture_sampler: Sampler,
}
/// The bind group shared among all invocations of the depth of field shader,
/// regardless of view.
#[derive(Resource, Default, Deref, DerefMut)]
pub struct DepthOfFieldGlobalBindGroup(Option<BindGroup>);
#[derive(Component)]
pub enum DepthOfFieldPipelines {
Gaussian {
horizontal: CachedRenderPipelineId,
vertical: CachedRenderPipelineId,
},
Bokeh {
pass_0: CachedRenderPipelineId,
pass_1: CachedRenderPipelineId,
},
}
struct DepthOfFieldPipelineRenderInfo {
pass_label: &'static str,
view_bind_group_label: &'static str,
pipeline: CachedRenderPipelineId,
is_dual_input: bool,
is_dual_output: bool,
}
/// The extra texture used as the second render target for the hexagonal bokeh
/// blur.
///
/// This is the same size and format as the main view target texture. It'll only
/// be present if bokeh is being used.
#[derive(Component, Deref, DerefMut)]
pub struct AuxiliaryDepthOfFieldTexture(CachedTexture);
/// Bind group layouts for depth of field specific to a single view.
#[derive(Component, Clone)]
pub struct ViewDepthOfFieldBindGroupLayouts {
/// The bind group layout for passes that take only one input.
single_input: BindGroupLayout,
/// The bind group layout for the second bokeh pass, which takes two inputs.
///
/// This will only be present if bokeh is in use.
dual_input: Option<BindGroupLayout>,
}
/// Information needed to specialize the pipeline corresponding to a pass of the
/// depth of field shader.
pub struct DepthOfFieldPipeline {
/// The bind group layouts specific to each view.
view_bind_group_layouts: ViewDepthOfFieldBindGroupLayouts,
/// The bind group layout shared among all invocations of the depth of field
/// shader.
global_bind_group_layout: BindGroupLayout,
}
impl ViewNode for DepthOfFieldNode {
type ViewQuery = (
Read<ViewUniformOffset>,
Read<ViewTarget>,
Read<ViewDepthTexture>,
Read<DepthOfFieldPipelines>,
Read<ViewDepthOfFieldBindGroupLayouts>,
Read<DynamicUniformIndex<DepthOfFieldUniform>>,
Option<Read<AuxiliaryDepthOfFieldTexture>>,
);
fn run<'w>(
&self,
_: &mut RenderGraphContext,
render_context: &mut RenderContext<'w>,
(
view_uniform_offset,
view_target,
view_depth_texture,
view_pipelines,
view_bind_group_layouts,
dof_settings_uniform_index,
auxiliary_dof_texture,
): QueryItem<'w, Self::ViewQuery>,
world: &'w World,
) -> Result<(), NodeRunError> {
let pipeline_cache = world.resource::<PipelineCache>();
let view_uniforms = world.resource::<ViewUniforms>();
let global_bind_group = world.resource::<DepthOfFieldGlobalBindGroup>();
// We can be in either Gaussian blur or bokeh mode here. Both modes are
// similar, consisting of two passes each. We factor out the information
// specific to each pass into
// [`DepthOfFieldPipelines::pipeline_render_info`].
for pipeline_render_info in view_pipelines.pipeline_render_info().iter() {
let (Some(render_pipeline), Some(view_uniforms_binding), Some(global_bind_group)) = (
pipeline_cache.get_render_pipeline(pipeline_render_info.pipeline),
view_uniforms.uniforms.binding(),
&**global_bind_group,
) else {
return Ok(());
};
// We use most of the postprocess infrastructure here. However,
// because the bokeh pass has an additional render target, we have
// to manage a secondary *auxiliary* texture alongside the textures
// managed by the postprocessing logic.
let postprocess = view_target.post_process_write();
let view_bind_group = if pipeline_render_info.is_dual_input {
let (Some(auxiliary_dof_texture), Some(dual_input_bind_group_layout)) = (
auxiliary_dof_texture,
view_bind_group_layouts.dual_input.as_ref(),
) else {
warn_once!("Should have created the auxiliary depth of field texture by now");
continue;
};
render_context.render_device().create_bind_group(
Some(pipeline_render_info.view_bind_group_label),
dual_input_bind_group_layout,
&BindGroupEntries::sequential((
view_uniforms_binding,
view_depth_texture.view(),
postprocess.source,
&auxiliary_dof_texture.default_view,
)),
)
} else {
render_context.render_device().create_bind_group(
Some(pipeline_render_info.view_bind_group_label),
&view_bind_group_layouts.single_input,
&BindGroupEntries::sequential((
view_uniforms_binding,
view_depth_texture.view(),
postprocess.source,
)),
)
};
// Push the first input attachment.
let mut color_attachments: SmallVec<[_; 2]> = SmallVec::new();
color_attachments.push(Some(RenderPassColorAttachment {
view: postprocess.destination,
resolve_target: None,
ops: Operations {
load: LoadOp::Clear(default()),
store: StoreOp::Store,
},
}));
// The first pass of the bokeh shader has two color outputs, not
// one. Handle this case by attaching the auxiliary texture, which
// should have been created by now in
// `prepare_auxiliary_depth_of_field_textures`.
if pipeline_render_info.is_dual_output {
let Some(auxiliary_dof_texture) = auxiliary_dof_texture else {
warn_once!("Should have created the auxiliary depth of field texture by now");
continue;
};
color_attachments.push(Some(RenderPassColorAttachment {
view: &auxiliary_dof_texture.default_view,
resolve_target: None,
ops: Operations {
load: LoadOp::Clear(default()),
store: StoreOp::Store,
},
}));
}
let render_pass_descriptor = RenderPassDescriptor {
label: Some(pipeline_render_info.pass_label),
color_attachments: &color_attachments,
..default()
};
let mut render_pass = render_context
.command_encoder()
.begin_render_pass(&render_pass_descriptor);
render_pass.set_pipeline(render_pipeline);
// Set the per-view bind group.
render_pass.set_bind_group(0, &view_bind_group, &[view_uniform_offset.offset]);
// Set the global bind group shared among all invocations of the shader.
render_pass.set_bind_group(1, global_bind_group, &[dof_settings_uniform_index.index()]);
// Render the full-screen pass.
render_pass.draw(0..3, 0..1);
}
Ok(())
}
}
impl Default for DepthOfFieldSettings {
fn default() -> Self {
let physical_camera_default = PhysicalCameraParameters::default();
Self {
focal_distance: 10.0,
aperture_f_stops: physical_camera_default.aperture_f_stops,
sensor_height: physical_camera_default.sensor_height,
max_circle_of_confusion_diameter: 64.0,
max_depth: INFINITY,
mode: DepthOfFieldMode::Bokeh,
}
}
}
impl DepthOfFieldSettings {
/// Initializes [`DepthOfFieldSettings`] from a set of
/// [`PhysicalCameraParameters`].
///
/// By passing the same [`PhysicalCameraParameters`] object to this function
/// and to [`bevy_render::camera::Exposure::from_physical_camera`], matching
/// results for both the exposure and depth of field effects can be
/// obtained.
///
/// All fields of the returned [`DepthOfFieldSettings`] other than
/// `sensor_height` and `aperture_f_stops` are set to their default values.
pub fn from_physical_camera(camera: &PhysicalCameraParameters) -> DepthOfFieldSettings {
DepthOfFieldSettings {
sensor_height: camera.sensor_height,
aperture_f_stops: camera.aperture_f_stops,
..default()
}
}
}
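To make the copy-versus-default behavior above concrete, here is a standalone sketch. The two structs are trimmed local stand-ins for the Bevy types (only the fields used here), populated with the default values this PR establishes; they are not the real definitions:

```rust
/// Trimmed local stand-in for Bevy's `PhysicalCameraParameters` (only the
/// fields relevant to depth of field), with the defaults from this PR.
#[derive(Clone, Copy)]
struct PhysicalCameraParameters {
    aperture_f_stops: f32,
    sensor_height: f32,
}

impl Default for PhysicalCameraParameters {
    fn default() -> Self {
        // f/1 and the 18.66mm Super 35 sensor height.
        Self {
            aperture_f_stops: 1.0,
            sensor_height: 0.01866,
        }
    }
}

/// Trimmed local stand-in for `DepthOfFieldSettings`.
#[derive(Clone, Copy, Debug)]
struct DepthOfFieldSettings {
    focal_distance: f32,
    aperture_f_stops: f32,
    sensor_height: f32,
}

fn main() {
    // Only `sensor_height` and `aperture_f_stops` come from the camera
    // parameters; everything else (here, `focal_distance`) stays at its
    // `DepthOfFieldSettings` default.
    let camera = PhysicalCameraParameters {
        aperture_f_stops: 2.8,
        ..Default::default()
    };
    let dof = DepthOfFieldSettings {
        sensor_height: camera.sensor_height,
        aperture_f_stops: camera.aperture_f_stops,
        focal_distance: 10.0, // the default from `DepthOfFieldSettings::default`
    };
    println!("{dof:?}");
}
```

As the doc comment notes, passing the same `PhysicalCameraParameters` to `Exposure::from_physical_camera` keeps the exposure and the blur consistent with each other.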
impl FromWorld for DepthOfFieldGlobalBindGroupLayout {
fn from_world(world: &mut World) -> Self {
let render_device = world.resource::<RenderDevice>();
// Create the bind group layout that will be shared among all instances
// of the depth of field shader.
let layout = render_device.create_bind_group_layout(
Some("depth of field global bind group layout"),
&BindGroupLayoutEntries::sequential(
ShaderStages::FRAGMENT,
(
// `dof_params`
uniform_buffer::<DepthOfFieldUniform>(true),
// `color_texture_sampler`
sampler(SamplerBindingType::Filtering),
),
),
);
// Create the color texture sampler.
let sampler = render_device.create_sampler(&SamplerDescriptor {
label: Some("depth of field sampler"),
mag_filter: FilterMode::Linear,
min_filter: FilterMode::Linear,
..default()
});
DepthOfFieldGlobalBindGroupLayout {
color_texture_sampler: sampler,
layout,
}
}
}
/// Creates the bind group layouts for the depth of field effect that are
/// specific to each view.
pub fn prepare_depth_of_field_view_bind_group_layouts(
mut commands: Commands,
view_targets: Query<(Entity, &DepthOfFieldSettings)>,
msaa: Res<Msaa>,
render_device: Res<RenderDevice>,
) {
for (view, dof_settings) in view_targets.iter() {
// Create the bind group layout for the passes that take one input.
let single_input = render_device.create_bind_group_layout(
Some("depth of field bind group layout (single input)"),
&BindGroupLayoutEntries::sequential(
ShaderStages::FRAGMENT,
(
uniform_buffer::<ViewUniform>(true),
if *msaa != Msaa::Off {
texture_depth_2d_multisampled()
} else {
texture_depth_2d()
},
texture_2d(TextureSampleType::Float { filterable: true }),
),
),
);
// If needed, create the bind group layout for the second bokeh pass,
// which takes two inputs. We only need to do this if bokeh is in use.
let dual_input = match dof_settings.mode {
DepthOfFieldMode::Gaussian => None,
DepthOfFieldMode::Bokeh => Some(render_device.create_bind_group_layout(
Some("depth of field bind group layout (dual input)"),
&BindGroupLayoutEntries::sequential(
ShaderStages::FRAGMENT,
(
uniform_buffer::<ViewUniform>(true),
if *msaa != Msaa::Off {
texture_depth_2d_multisampled()
} else {
texture_depth_2d()
},
texture_2d(TextureSampleType::Float { filterable: true }),
texture_2d(TextureSampleType::Float { filterable: true }),
),
),
)),
};
commands
.entity(view)
.insert(ViewDepthOfFieldBindGroupLayouts {
single_input,
dual_input,
});
}
}
/// Configures depth textures so that the depth of field shader can read from
/// them.
///
/// By default, the depth buffers that Bevy creates can't be bound as textures.
/// The depth of field shader, however, needs to read from them, so we set the
/// appropriate flag to tell Bevy to create samplable depth buffers.
pub fn configure_depth_of_field_view_targets(
mut view_targets: Query<&mut Camera3d, With<DepthOfFieldSettings>>,
) {
for mut camera_3d in view_targets.iter_mut() {
let mut depth_texture_usages = TextureUsages::from(camera_3d.depth_texture_usages);
depth_texture_usages |= TextureUsages::TEXTURE_BINDING;
camera_3d.depth_texture_usages = depth_texture_usages.into();
}
}
/// Creates depth of field bind group 1, which is shared among all instances of
/// the depth of field shader.
pub fn prepare_depth_of_field_global_bind_group(
global_bind_group_layout: Res<DepthOfFieldGlobalBindGroupLayout>,
mut dof_bind_group: ResMut<DepthOfFieldGlobalBindGroup>,
dof_settings_uniforms: Res<ComponentUniforms<DepthOfFieldUniform>>,
render_device: Res<RenderDevice>,
) {
let Some(dof_settings_uniforms) = dof_settings_uniforms.binding() else {
return;
};
**dof_bind_group = Some(render_device.create_bind_group(
Some("depth of field global bind group"),
&global_bind_group_layout.layout,
&BindGroupEntries::sequential((
dof_settings_uniforms, // `dof_params`
&global_bind_group_layout.color_texture_sampler, // `color_texture_sampler`
)),
));
}
/// Creates the second render target texture that the first pass of the bokeh
/// effect needs.
pub fn prepare_auxiliary_depth_of_field_textures(
mut commands: Commands,
render_device: Res<RenderDevice>,
mut texture_cache: ResMut<TextureCache>,
mut view_targets: Query<(Entity, &ViewTarget, &DepthOfFieldSettings)>,
) {
for (entity, view_target, dof_settings) in view_targets.iter_mut() {
// An auxiliary texture is only needed for bokeh.
if dof_settings.mode != DepthOfFieldMode::Bokeh {
continue;
}
// The texture matches the main view target texture.
let texture_descriptor = TextureDescriptor {
label: Some("depth of field auxiliary texture"),
size: view_target.main_texture().size(),
mip_level_count: 1,
sample_count: view_target.main_texture().sample_count(),
dimension: TextureDimension::D2,
format: view_target.main_texture_format(),
usage: TextureUsages::RENDER_ATTACHMENT | TextureUsages::TEXTURE_BINDING,
view_formats: &[],
};
let texture = texture_cache.get(&render_device, texture_descriptor);
commands
.entity(entity)
.insert(AuxiliaryDepthOfFieldTexture(texture));
}
}
/// Specializes the depth of field pipelines specific to a view.
pub fn prepare_depth_of_field_pipelines(
mut commands: Commands,
pipeline_cache: Res<PipelineCache>,
mut pipelines: ResMut<SpecializedRenderPipelines<DepthOfFieldPipeline>>,
msaa: Res<Msaa>,
global_bind_group_layout: Res<DepthOfFieldGlobalBindGroupLayout>,
view_targets: Query<(
Entity,
&ExtractedView,
&DepthOfFieldSettings,
&ViewDepthOfFieldBindGroupLayouts,
)>,
) {
for (entity, view, dof_settings, view_bind_group_layouts) in view_targets.iter() {
let dof_pipeline = DepthOfFieldPipeline {
view_bind_group_layouts: view_bind_group_layouts.clone(),
global_bind_group_layout: global_bind_group_layout.layout.clone(),
};
// We'll need these two flags to create the `DepthOfFieldPipelineKey`s.
let (hdr, multisample) = (view.hdr, *msaa != Msaa::Off);
// Go ahead and specialize the pipelines.
match dof_settings.mode {
DepthOfFieldMode::Gaussian => {
commands
.entity(entity)
.insert(DepthOfFieldPipelines::Gaussian {
horizontal: pipelines.specialize(
&pipeline_cache,
&dof_pipeline,
DepthOfFieldPipelineKey {
hdr,
multisample,
pass: DofPass::GaussianHorizontal,
},
),
vertical: pipelines.specialize(
&pipeline_cache,
&dof_pipeline,
DepthOfFieldPipelineKey {
hdr,
multisample,
pass: DofPass::GaussianVertical,
},
),
});
}
DepthOfFieldMode::Bokeh => {
commands
.entity(entity)
.insert(DepthOfFieldPipelines::Bokeh {
pass_0: pipelines.specialize(
&pipeline_cache,
&dof_pipeline,
DepthOfFieldPipelineKey {
hdr,
multisample,
pass: DofPass::BokehPass0,
},
),
pass_1: pipelines.specialize(
&pipeline_cache,
&dof_pipeline,
DepthOfFieldPipelineKey {
hdr,
multisample,
pass: DofPass::BokehPass1,
},
),
});
}
}
}
}
impl SpecializedRenderPipeline for DepthOfFieldPipeline {
type Key = DepthOfFieldPipelineKey;
fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
// Build up our pipeline layout.
let (mut layout, mut shader_defs) = (vec![], vec![]);
let mut targets = vec![Some(ColorTargetState {
format: if key.hdr {
ViewTarget::TEXTURE_FORMAT_HDR
} else {
TextureFormat::bevy_default()
},
blend: None,
write_mask: ColorWrites::ALL,
})];
// Select bind group 0, the view-specific bind group.
match key.pass {
DofPass::GaussianHorizontal | DofPass::GaussianVertical => {
// Gaussian blurs take only a single input and output.
layout.push(self.view_bind_group_layouts.single_input.clone());
}
DofPass::BokehPass0 => {
// The first bokeh pass takes one input and produces two outputs.
layout.push(self.view_bind_group_layouts.single_input.clone());
targets.push(targets[0].clone());
}
DofPass::BokehPass1 => {
// The second bokeh pass takes the two outputs from the first
// bokeh pass and produces a single output.
let dual_input_bind_group_layout = self
.view_bind_group_layouts
.dual_input
.as_ref()
.expect("Dual-input depth of field bind group should have been created by now")
.clone();
layout.push(dual_input_bind_group_layout);
shader_defs.push("DUAL_INPUT".into());
}
}
// Add bind group 1, the global bind group.
layout.push(self.global_bind_group_layout.clone());
if key.multisample {
shader_defs.push("MULTISAMPLED".into());
}
RenderPipelineDescriptor {
label: Some("depth of field pipeline".into()),
layout,
push_constant_ranges: vec![],
vertex: fullscreen_shader_vertex_state(),
primitive: default(),
depth_stencil: None,
multisample: default(),
fragment: Some(FragmentState {
shader: DOF_SHADER_HANDLE,
shader_defs,
entry_point: match key.pass {
DofPass::GaussianHorizontal => "gaussian_horizontal".into(),
DofPass::GaussianVertical => "gaussian_vertical".into(),
DofPass::BokehPass0 => "bokeh_pass_0".into(),
DofPass::BokehPass1 => "bokeh_pass_1".into(),
},
targets,
}),
}
}
}
/// Extracts all [`DepthOfFieldSettings`] components into the render world.
fn extract_depth_of_field_settings(
mut commands: Commands,
msaa: Extract<Res<Msaa>>,
mut query: Extract<Query<(Entity, &DepthOfFieldSettings, &Projection)>>,
) {
if **msaa != Msaa::Off && !depth_textures_are_supported() {
info_once!(
"Disabling depth of field on this platform because depth textures aren't available"
);
return;
}
for (entity, dof_settings, projection) in query.iter_mut() {
// Depth of field is nonsensical without a perspective projection.
let Projection::Perspective(ref perspective_projection) = *projection else {
continue;
};
let focal_length =
calculate_focal_length(dof_settings.sensor_height, perspective_projection.fov);
// Convert `DepthOfFieldSettings` to `DepthOfFieldUniform`.
commands.get_or_spawn(entity).insert((
*dof_settings,
DepthOfFieldUniform {
focal_distance: dof_settings.focal_distance,
focal_length,
coc_scale_factor: focal_length * focal_length
/ (dof_settings.sensor_height * dof_settings.aperture_f_stops),
max_circle_of_confusion_diameter: dof_settings.max_circle_of_confusion_diameter,
max_depth: dof_settings.max_depth,
pad_a: 0,
pad_b: 0,
pad_c: 0,
},
));
}
}
/// Given the sensor height and the FOV, returns the focal length.
///
/// See <https://photo.stackexchange.com/a/97218>.
pub fn calculate_focal_length(sensor_height: f32, fov: f32) -> f32 {
0.5 * sensor_height / f32::tan(0.5 * fov)
}
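As a sanity check of the two formulas above (`calculate_focal_length` and the `coc_scale_factor` expression in `extract_depth_of_field_settings`), here is a standalone sketch that simply mirrors them outside of Bevy, using the defaults from this PR: the 18.66mm Super 35 sensor height, Bevy's default vertical FOV of π/4, and f/1:

```rust
use std::f32::consts::PI;

/// Standalone mirror of `calculate_focal_length` above:
/// focal_length = sensor_height / (2 * tan(fov / 2)).
fn focal_length(sensor_height: f32, fov: f32) -> f32 {
    0.5 * sensor_height / f32::tan(0.5 * fov)
}

fn main() {
    // Defaults from this PR: 18.66mm Super 35 sensor, pi/4 vertical FOV.
    let sensor_height = 0.01866_f32; // meters
    let fov = PI / 4.0;
    let f = focal_length(sensor_height, fov);
    println!("focal length: {:.1}mm", f * 1000.0); // roughly a 22.5mm lens

    // The scale factor that `extract_depth_of_field_settings` stores in the
    // uniform: focal_length^2 / (sensor_height * f-number).
    let aperture_f_stops = 1.0_f32;
    let coc_scale_factor = f * f / (sensor_height * aperture_f_stops);
    println!("CoC scale factor: {:.4}", coc_scale_factor);
}
```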
impl DepthOfFieldPipelines {
/// Populates the information that the `DepthOfFieldNode` needs for the two
/// depth of field render passes.
fn pipeline_render_info(&self) -> [DepthOfFieldPipelineRenderInfo; 2] {
match *self {
DepthOfFieldPipelines::Gaussian {
horizontal: horizontal_pipeline,
vertical: vertical_pipeline,
} => [
DepthOfFieldPipelineRenderInfo {
pass_label: "depth of field pass (horizontal Gaussian)",
view_bind_group_label: "depth of field view bind group (horizontal Gaussian)",
pipeline: horizontal_pipeline,
is_dual_input: false,
is_dual_output: false,
},
DepthOfFieldPipelineRenderInfo {
pass_label: "depth of field pass (vertical Gaussian)",
view_bind_group_label: "depth of field view bind group (vertical Gaussian)",
pipeline: vertical_pipeline,
is_dual_input: false,
is_dual_output: false,
},
],
DepthOfFieldPipelines::Bokeh {
pass_0: pass_0_pipeline,
pass_1: pass_1_pipeline,
} => [
DepthOfFieldPipelineRenderInfo {
pass_label: "depth of field pass (bokeh pass 0)",
view_bind_group_label: "depth of field view bind group (bokeh pass 0)",
pipeline: pass_0_pipeline,
is_dual_input: false,
is_dual_output: true,
},
DepthOfFieldPipelineRenderInfo {
pass_label: "depth of field pass (bokeh pass 1)",
view_bind_group_label: "depth of field view bind group (bokeh pass 1)",
pipeline: pass_1_pipeline,
is_dual_input: true,
is_dual_output: false,
},
],
}
}
}
/// Returns true if multisampled depth textures are supported on this platform.
///
/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
/// because of a silly bug whereby Naga assumes that all depth textures are
/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
/// perform non-percentage-closer-filtering with such a sampler. Therefore we
/// disable depth of field entirely on WebGL 2.
#[cfg(target_arch = "wasm32")]
fn depth_textures_are_supported() -> bool {
false
}
/// Returns true if multisampled depth textures are supported on this platform.
///
/// In theory, Naga supports depth textures on WebGL 2. In practice, it doesn't,
/// because of a silly bug whereby Naga assumes that all depth textures are
/// `sampler2DShadow` and will cheerfully generate invalid GLSL that tries to
/// perform non-percentage-closer-filtering with such a sampler. Therefore we
/// disable depth of field entirely on WebGL 2.
#[cfg(not(target_arch = "wasm32"))]
fn depth_textures_are_supported() -> bool {
true
}

@@ -14,6 +14,7 @@ pub mod contrast_adaptive_sharpening;
pub mod core_2d;
pub mod core_3d;
pub mod deferred;
pub mod dof;
pub mod fullscreen_vertex_shader;
pub mod fxaa;
pub mod motion_blur;
@@ -53,6 +54,7 @@ use crate::{
core_2d::Core2dPlugin,
core_3d::Core3dPlugin,
deferred::copy_lighting_id::CopyDeferredLightingIdPlugin,
dof::DepthOfFieldPlugin,
fullscreen_vertex_shader::FULLSCREEN_SHADER_HANDLE,
fxaa::FxaaPlugin,
motion_blur::MotionBlurPlugin,
@@ -93,6 +95,7 @@ impl Plugin for CorePipelinePlugin {
FxaaPlugin,
CASPlugin,
MotionBlurPlugin,
DepthOfFieldPlugin,
));
}
}

@@ -146,8 +146,8 @@ impl Default for Exposure {
}
}
/// Parameters based on physical camera characteristics for calculating EV100
/// values for use with [`Exposure`]. This is also used for depth of field.
#[derive(Clone, Copy)]
pub struct PhysicalCameraParameters {
/// <https://en.wikipedia.org/wiki/F-number>
@@ -156,6 +156,15 @@ pub struct PhysicalCameraParameters {
pub shutter_speed_s: f32,
/// <https://en.wikipedia.org/wiki/Film_speed>
pub sensitivity_iso: f32,
/// The height of the [image sensor format] in meters.
///
/// Focal length is derived from the FOV and this value. The default is
/// 18.66mm, matching the [Super 35] format, which is popular in cinema.
///
/// [image sensor format]: https://en.wikipedia.org/wiki/Image_sensor_format
///
/// [Super 35]: https://en.wikipedia.org/wiki/Super_35
pub sensor_height: f32,
}
impl PhysicalCameraParameters {
@@ -173,6 +182,7 @@ impl Default for PhysicalCameraParameters {
aperture_f_stops: 1.0,
shutter_speed_s: 1.0 / 125.0,
sensitivity_iso: 100.0,
sensor_height: 0.01866,
}
}
}

@@ -0,0 +1,278 @@
//! Demonstrates depth of field (DOF).
//!
//! The depth of field effect simulates the blur that a real camera produces on
//! objects that are out of focus.
//!
//! The test scene is inspired by [a blog post on depth of field in Unity].
//! However, the technique used in Bevy has little to do with that blog post,
//! and all the assets are original.
//!
//! [a blog post on depth of field in Unity]: https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/
use bevy::{
core_pipeline::{
bloom::BloomSettings,
dof::{self, DepthOfFieldMode, DepthOfFieldSettings},
tonemapping::Tonemapping,
},
pbr::Lightmap,
prelude::*,
render::camera::PhysicalCameraParameters,
};
/// The increments in which the user can adjust the focal distance, in meters
/// per frame.
const FOCAL_DISTANCE_SPEED: f32 = 0.05;
/// The increments in which the user can adjust the f-number, in units per frame.
const APERTURE_F_STOP_SPEED: f32 = 0.01;
/// The minimum distance that we allow the user to focus on.
const MIN_FOCAL_DISTANCE: f32 = 0.01;
/// The minimum f-number that we allow the user to set.
const MIN_APERTURE_F_STOPS: f32 = 0.05;
/// A resource that stores the settings that the user can change.
#[derive(Clone, Copy, Resource)]
struct AppSettings {
/// The distance from the camera to the area that is in sharpest focus.
focal_distance: f32,
/// The [f-number]. Lower numbers cause objects outside the focal distance
/// to be blurred more.
///
/// [f-number]: https://en.wikipedia.org/wiki/F-number
aperture_f_stops: f32,
/// Whether depth of field is on, and, if so, whether we're in Gaussian or
/// bokeh mode.
mode: Option<DepthOfFieldMode>,
}
fn main() {
App::new()
.init_resource::<AppSettings>()
.add_plugins(DefaultPlugins.set(WindowPlugin {
primary_window: Some(Window {
title: "Bevy Depth of Field Example".to_string(),
..default()
}),
..default()
}))
.add_systems(Startup, setup)
.add_systems(Update, tweak_scene)
.add_systems(
Update,
(adjust_focus, change_mode, update_dof_settings, update_text).chain(),
)
.run();
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>, app_settings: Res<AppSettings>) {
// Spawn the camera. Enable HDR and bloom, as that highlights the depth of
// field effect.
let mut camera = commands.spawn(Camera3dBundle {
transform: Transform::from_xyz(0.0, 4.5, 8.25).looking_at(Vec3::ZERO, Vec3::Y),
camera: Camera {
hdr: true,
..default()
},
tonemapping: Tonemapping::TonyMcMapface,
..default()
});
camera.insert(BloomSettings::NATURAL);
// Insert the depth of field settings.
if let Some(dof_settings) = Option::<DepthOfFieldSettings>::from(*app_settings) {
camera.insert(dof_settings);
}
// Spawn the scene.
commands.spawn(SceneBundle {
scene: asset_server.load("models/DepthOfFieldExample/DepthOfFieldExample.glb#Scene0"),
..default()
});
// Spawn the help text.
commands.spawn(
TextBundle {
text: create_text(&asset_server, &app_settings),
..TextBundle::default()
}
.with_style(Style {
position_type: PositionType::Absolute,
bottom: Val::Px(10.0),
left: Val::Px(10.0),
..default()
}),
);
}
/// Adjusts the focal distance and f-number per user inputs.
fn adjust_focus(input: Res<ButtonInput<KeyCode>>, mut app_settings: ResMut<AppSettings>) {
// Change the focal distance if the user requested.
let distance_delta = if input.pressed(KeyCode::ArrowDown) {
-FOCAL_DISTANCE_SPEED
} else if input.pressed(KeyCode::ArrowUp) {
FOCAL_DISTANCE_SPEED
} else {
0.0
};
// Change the f-number if the user requested.
let f_stop_delta = if input.pressed(KeyCode::ArrowLeft) {
-APERTURE_F_STOP_SPEED
} else if input.pressed(KeyCode::ArrowRight) {
APERTURE_F_STOP_SPEED
} else {
0.0
};
app_settings.focal_distance =
(app_settings.focal_distance + distance_delta).max(MIN_FOCAL_DISTANCE);
app_settings.aperture_f_stops =
(app_settings.aperture_f_stops + f_stop_delta).max(MIN_APERTURE_F_STOPS);
}
/// Changes the depth of field mode (Gaussian, bokeh, off) per user inputs.
fn change_mode(input: Res<ButtonInput<KeyCode>>, mut app_settings: ResMut<AppSettings>) {
if !input.just_pressed(KeyCode::Space) {
return;
}
app_settings.mode = match app_settings.mode {
Some(DepthOfFieldMode::Bokeh) => Some(DepthOfFieldMode::Gaussian),
Some(DepthOfFieldMode::Gaussian) => None,
None => Some(DepthOfFieldMode::Bokeh),
};
}
impl Default for AppSettings {
fn default() -> Self {
Self {
// Objects 7 meters away will be in full focus.
focal_distance: 7.0,
// Set a nice blur level.
//
// This is a really low F-number, but we want to demonstrate the
// effect, even if it's kind of unrealistic.
aperture_f_stops: 1.0 / 8.0,
// Turn on bokeh by default, as it's the nicest-looking technique.
mode: Some(DepthOfFieldMode::Bokeh),
}
}
}
/// Writes the depth of field settings into the camera.
fn update_dof_settings(
mut commands: Commands,
view_targets: Query<Entity, With<Camera>>,
app_settings: Res<AppSettings>,
) {
let dof_settings: Option<DepthOfFieldSettings> = (*app_settings).into();
for view in view_targets.iter() {
match dof_settings {
None => {
commands.entity(view).remove::<DepthOfFieldSettings>();
}
Some(dof_settings) => {
commands.entity(view).insert(dof_settings);
}
}
}
}
/// Makes one-time adjustments to the scene that can't be encoded in glTF.
fn tweak_scene(
mut commands: Commands,
asset_server: Res<AssetServer>,
mut materials: ResMut<Assets<StandardMaterial>>,
mut lights: Query<&mut DirectionalLight, Changed<DirectionalLight>>,
mut named_entities: Query<
(Entity, &Name, &Handle<StandardMaterial>),
(With<Handle<Mesh>>, Without<Lightmap>),
>,
) {
// Turn on shadows.
for mut light in lights.iter_mut() {
light.shadows_enabled = true;
}
// Add a nice lightmap to the circuit board.
for (entity, name, material) in named_entities.iter_mut() {
if &**name == "CircuitBoard" {
materials.get_mut(material).unwrap().lightmap_exposure = 10000.0;
commands.entity(entity).insert(Lightmap {
image: asset_server.load("models/DepthOfFieldExample/CircuitBoardLightmap.hdr"),
..default()
});
}
}
}
/// Update the help text entity per the current app settings.
fn update_text(
mut texts: Query<&mut Text>,
asset_server: Res<AssetServer>,
app_settings: Res<AppSettings>,
) {
for mut text in texts.iter_mut() {
*text = create_text(&asset_server, &app_settings);
}
}
/// Regenerates the app text component per the current app settings.
fn create_text(asset_server: &AssetServer, app_settings: &AppSettings) -> Text {
Text::from_section(
app_settings.help_text(),
TextStyle {
font: asset_server.load("fonts/FiraMono-Medium.ttf"),
font_size: 24.0,
..default()
},
)
}
impl From<AppSettings> for Option<DepthOfFieldSettings> {
fn from(app_settings: AppSettings) -> Self {
app_settings.mode.map(|mode| DepthOfFieldSettings {
mode,
focal_distance: app_settings.focal_distance,
aperture_f_stops: app_settings.aperture_f_stops,
max_depth: 14.0,
..default()
})
}
}
impl AppSettings {
/// Builds the help text.
fn help_text(&self) -> String {
let Some(mode) = self.mode else {
return "Mode: Off (Press Space to change)".to_owned();
};
// We leave these as their defaults, so we don't need to store them in
// the app settings and can just fetch them from the default camera
// parameters.
let sensor_height = PhysicalCameraParameters::default().sensor_height;
let fov = PerspectiveProjection::default().fov;
format!(
"Focal distance: {} m (Press Up/Down to change)
Aperture F-stops: f/{} (Press Left/Right to change)
Sensor height: {}mm
Focal length: {}mm
Mode: {} (Press Space to change)",
self.focal_distance,
self.aperture_f_stops,
sensor_height * 1000.0,
dof::calculate_focal_length(sensor_height, fov) * 1000.0,
match mode {
DepthOfFieldMode::Bokeh => "Bokeh",
DepthOfFieldMode::Gaussian => "Gaussian",
}
)
}
}

@@ -17,6 +17,7 @@ fn main() {
aperture_f_stops: 1.0,
shutter_speed_s: 1.0 / 125.0,
sensitivity_iso: 100.0,
sensor_height: 0.01866,
}))
.add_systems(Startup, setup)
.add_systems(Update, (update_exposure, movement, animate_light_direction))

@@ -134,6 +134,7 @@ Example | Description
[Clearcoat](../examples/3d/clearcoat.rs) | Demonstrates the clearcoat PBR feature
[Color grading](../examples/3d/color_grading.rs) | Demonstrates color grading
[Deferred Rendering](../examples/3d/deferred_rendering.rs) | Renders meshes with both forward and deferred pipelines
[Depth of field](../examples/3d/depth_of_field.rs) | Demonstrates depth of field
[Fog](../examples/3d/fog.rs) | A scene showcasing the distance fog effect
[Generate Custom Mesh](../examples/3d/generate_custom_mesh.rs) | Simple showcase of how to generate a custom mesh with a custom texture
[Irradiance Volumes](../examples/3d/irradiance_volumes.rs) | Demonstrates irradiance volumes