Meshlet software raster + start of cleanup (#14623)
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs
## Solution
- Replace the 32-bit visbuffer texture with a 64-bit visbuffer buffer,
where the upper 32 bits encode depth, and the lower 32 bits encode the
existing cluster + triangle IDs (see the packing sketch after this list).
We can't use a 64-bit texture, as wgpu/naga don't support atomic ops on
textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (one
per triangle) to rasterize, the culling pass now writes out a buffer of
just cluster IDs (one per cluster), so less memory is allocated and it's
cheaper to write out.
- Clusters for software raster are allocated from the left side of this
buffer
- Clusters for hardware raster are allocated in the same buffer, from
the right side (see the allocation sketch after this list)
- The buffer size is fixed at MeshletPlugin build time and should be
set to a reasonable value for your scene (there's no warning on overflow,
and no good way to determine the value you need outside of RenderDoc - I
plan to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is one compute workgroup per cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: a scanline variant for clusters that
contain larger triangles (still smaller than what hardware raster is
good at), and a brute-force variant for very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max-workgroups-per-dispatch limit, an extra
compute pass is inserted before software raster to convert the 1D
dispatch to a 2D one (I don't think 3D would ever be necessary; see the
dispatch remap sketch after this list).
- I haven't implemented the top-left rule or subpixel precision yet;
I'm leaving those for a future PR since I get usable results without
them for now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures
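
Roughly how the 64-bit visbuffer packing works, as a CPU-side Rust sketch (the real version is WGSL doing an `atomicMax` on a storage buffer, and the exact depth convention below is an assumption):

```rust
/// Sketch only: pack depth into the upper 32 bits and the existing packed
/// cluster + triangle ID into the lower 32 bits, so a single 64-bit max keeps
/// the closest fragment (assuming a depth encoding where larger bits mean
/// closer, e.g. reverse-Z in [0, 1]).
fn pack_visbuffer_entry(depth: f32, cluster_triangle_id: u32) -> u64 {
    ((depth.to_bits() as u64) << 32) | cluster_triangle_id as u64
}

/// What both raster paths effectively do per covered pixel; the GPU version
/// uses an atomic max so overlapping triangles resolve correctly.
fn write_visbuffer_pixel(visbuffer: &mut [u64], pixel: usize, depth: f32, id: u32) {
    let packed = pack_visbuffer_entry(depth, id);
    visbuffer[pixel] = visbuffer[pixel].max(packed);
}
```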
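
A sketch of the two-ended cluster ID allocation (names and the CPU-side atomics are illustrative; the culling shader does the equivalent with atomic adds in WGSL):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Slot allocation for the unified cluster ID buffer: software-raster clusters
/// fill it from the left end, hardware-raster clusters fill it from the right.
/// Overflow checking is omitted here, matching the current lack of a warning.
struct ClusterSlotAllocator {
    capacity: u32,             // fixed at MeshletPlugin build time
    software_count: AtomicU32, // slots handed out from the left
    hardware_count: AtomicU32, // slots handed out from the right
}

impl ClusterSlotAllocator {
    /// Slot a software-rasterized cluster ID gets written to.
    fn alloc_software(&self) -> u32 {
        self.software_count.fetch_add(1, Ordering::Relaxed)
    }

    /// Slot a hardware-rasterized cluster ID gets written to.
    fn alloc_hardware(&self) -> u32 {
        let n = self.hardware_count.fetch_add(1, Ordering::Relaxed);
        self.capacity - 1 - n
    }
}
```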
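
And a sketch of the 1D -> 2D dispatch remap for devices with a low max-workgroups-per-dimension limit (the helper names are mine; the real conversion is a small compute prepass):

```rust
/// Split a 1D workgroup count into a 2D dispatch that stays under the
/// per-dimension limit. Since x * y can exceed the real count, the raster
/// shader still bounds-checks the reconstructed cluster index.
fn remap_1d_to_2d_dispatch(workgroups: u32, max_per_dimension: u32) -> (u32, u32) {
    if workgroups <= max_per_dimension {
        (workgroups, 1)
    } else {
        (max_per_dimension, workgroups.div_ceil(max_per_dimension))
    }
}

/// How the shader would rebuild the linear cluster index from its workgroup ID.
fn cluster_index(workgroup_x: u32, workgroup_y: u32, dispatch_width: u32) -> u32 {
    workgroup_y * dispatch_width + workgroup_x
}
```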
### Misc changes
- Fixed cluster culling importing the previous view uniforms but never
actually using them when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate "material
shade pass", along with some other misc renaming (in the future these
will even be compute shaders, not actual draw calls)
---
## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)
---------
Co-authored-by: vero <email@atlasdostal.com>
use super::{
    asset::{Meshlet, MeshletBoundingSpheres},
    persistent_buffer::PersistentGpuBuffer,
    MeshletMesh,
};
use alloc::sync::Arc;
use bevy_asset::{AssetId, Assets};
use bevy_ecs::{
    system::{Res, ResMut, Resource},
    world::{FromWorld, World},
};
use bevy_render::{
    render_resource::BufferAddress,
    renderer::{RenderDevice, RenderQueue},
};
use bevy_utils::HashMap;
use core::ops::Range;
/// Manages uploading [`MeshletMesh`] asset data to the GPU.
#[derive(Resource)]
pub struct MeshletMeshManager {
    pub vertex_data: PersistentGpuBuffer<Arc<[u8]>>,
    pub vertex_ids: PersistentGpuBuffer<Arc<[u32]>>,
    pub indices: PersistentGpuBuffer<Arc<[u8]>>,
    pub meshlets: PersistentGpuBuffer<Arc<[Meshlet]>>,
    pub meshlet_bounding_spheres: PersistentGpuBuffer<Arc<[MeshletBoundingSpheres]>>,
    meshlet_mesh_slices: HashMap<AssetId<MeshletMesh>, [Range<BufferAddress>; 5]>,
}

impl FromWorld for MeshletMeshManager {
    fn from_world(world: &mut World) -> Self {
        let render_device = world.resource::<RenderDevice>();
        Self {
            vertex_data: PersistentGpuBuffer::new("meshlet_vertex_data", render_device),
            vertex_ids: PersistentGpuBuffer::new("meshlet_vertex_ids", render_device),
            indices: PersistentGpuBuffer::new("meshlet_indices", render_device),
            meshlets: PersistentGpuBuffer::new("meshlets", render_device),
            meshlet_bounding_spheres: PersistentGpuBuffer::new(
                "meshlet_bounding_spheres",
                render_device,
            ),
            meshlet_mesh_slices: HashMap::new(),
        }
    }
}

impl MeshletMeshManager {
    pub fn queue_upload_if_needed(
        &mut self,
        asset_id: AssetId<MeshletMesh>,
        assets: &mut Assets<MeshletMesh>,
    ) -> Range<u32> {
        let queue_meshlet_mesh = |asset_id: &AssetId<MeshletMesh>| {
            let meshlet_mesh = assets.remove_untracked(*asset_id).expect(
                "MeshletMesh asset was already unloaded but is not registered with MeshletMeshManager",
            );

            let vertex_data_slice = self
                .vertex_data
                .queue_write(Arc::clone(&meshlet_mesh.vertex_data), ());
            let vertex_ids_slice = self.vertex_ids.queue_write(
                Arc::clone(&meshlet_mesh.vertex_ids),
                vertex_data_slice.start,
            );
            let indices_slice = self
                .indices
                .queue_write(Arc::clone(&meshlet_mesh.indices), ());
            let meshlets_slice = self.meshlets.queue_write(
                Arc::clone(&meshlet_mesh.meshlets),
                (vertex_ids_slice.start, indices_slice.start),
            );
            let meshlet_bounding_spheres_slice = self
                .meshlet_bounding_spheres
                .queue_write(Arc::clone(&meshlet_mesh.bounding_spheres), ());

            [
                vertex_data_slice,
                vertex_ids_slice,
                indices_slice,
                meshlets_slice,
                meshlet_bounding_spheres_slice,
            ]
        };

        // If the MeshletMesh asset has not been uploaded to the GPU yet, queue it for uploading
        let [_, _, _, meshlets_slice, _] = self
            .meshlet_mesh_slices
            .entry(asset_id)
            .or_insert_with_key(queue_meshlet_mesh)
            .clone();

        let meshlets_slice_start = meshlets_slice.start as u32 / size_of::<Meshlet>() as u32;
        let meshlets_slice_end = meshlets_slice.end as u32 / size_of::<Meshlet>() as u32;
        meshlets_slice_start..meshlets_slice_end
    }

    pub fn remove(&mut self, asset_id: &AssetId<MeshletMesh>) {
        if let Some(
            [vertex_data_slice, vertex_ids_slice, indices_slice, meshlets_slice, meshlet_bounding_spheres_slice],
        ) = self.meshlet_mesh_slices.remove(asset_id)
        {
            self.vertex_data.mark_slice_unused(vertex_data_slice);
            self.vertex_ids.mark_slice_unused(vertex_ids_slice);
            self.indices.mark_slice_unused(indices_slice);
            self.meshlets.mark_slice_unused(meshlets_slice);
            self.meshlet_bounding_spheres
                .mark_slice_unused(meshlet_bounding_spheres_slice);
        }
    }
}

/// Upload all newly queued [`MeshletMesh`] asset data to the GPU.
pub fn perform_pending_meshlet_mesh_writes(
    mut meshlet_mesh_manager: ResMut<MeshletMeshManager>,
    render_queue: Res<RenderQueue>,
    render_device: Res<RenderDevice>,
) {
    meshlet_mesh_manager
        .vertex_data
        .perform_writes(&render_queue, &render_device);
    meshlet_mesh_manager
        .vertex_ids
        .perform_writes(&render_queue, &render_device);
    meshlet_mesh_manager
        .indices
        .perform_writes(&render_queue, &render_device);
    meshlet_mesh_manager
        .meshlets
        .perform_writes(&render_queue, &render_device);
    meshlet_mesh_manager
        .meshlet_bounding_spheres
        .perform_writes(&render_queue, &render_device);
}