bevy/examples/2d/mesh2d_manual.rs


//! This example shows how to manually render 2d items using "mid level render apis" with a custom
//! pipeline for 2d meshes.
//! It doesn't use the [`Material2d`] abstraction, but changes the vertex buffer to include vertex color.
//! Check out the "mesh2d" example for simpler / higher level 2d meshes.
use bevy::{
core_pipeline::core_2d::Transparent2d,
prelude::*,
reflect::TypeUuid,
render::{
mesh::{Indices, MeshVertexAttribute},
render_asset::RenderAssets,
render_phase::{AddRenderCommand, DrawFunctions, RenderPhase, SetItemPipeline},
render_resource::{
BlendState, ColorTargetState, ColorWrites, Face, FragmentState, FrontFace,
MultisampleState, PipelineCache, PolygonMode, PrimitiveState, PrimitiveTopology,
RenderPipelineDescriptor, SpecializedRenderPipeline, SpecializedRenderPipelines,
TextureFormat, VertexBufferLayout, VertexFormat, VertexState, VertexStepMode,
},
texture::BevyDefault,
view::VisibleEntities,
RenderApp, RenderStage,
},
sprite::{
DrawMesh2d, Mesh2dHandle, Mesh2dPipeline, Mesh2dPipelineKey, Mesh2dUniform,
SetMesh2dBindGroup, SetMesh2dViewBindGroup,
},
utils::FloatOrd,
};
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugin(ColoredMesh2dPlugin)
.add_startup_system(star)
.run();
}
fn star(
mut commands: Commands,
// We will add a new Mesh for the star being created
mut meshes: ResMut<Assets<Mesh>>,
) {
// Let's define the mesh for the object we want to draw: a nice star.
// We will specify here what kind of topology is used to define the mesh,
// that is, how triangles are built from the vertices. We will use a
// triangle list, meaning that each vertex of the triangle has to be
// specified.
let mut star = Mesh::new(PrimitiveTopology::TriangleList);
    // Vertices need to have a position attribute. We will use the following
    // vertices (I hope you can spot the star in the diagram).
    //
    //         1
    //
    //      10   2
    //    9    0    3
    //      8    4
    //         6
    //     7       5
    //
    // These vertices are specified in 3D space.
let mut v_pos = vec![[0.0, 0.0, 0.0]];
for i in 0..10 {
        // Each vertex is spaced 1/10 of TAU apart; the PI/2 offset puts the first
        // pushed vertex (vertex 1) straight up, and subtracting makes the star wind
        // clockwise
        let a = std::f32::consts::FRAC_PI_2 - i as f32 * std::f32::consts::TAU / 10.0;
        // The radius of the inner vertices (2, 4, 6, 8, 10) is 100; the outer ones
        // (1, 3, 5, 7, 9) use 200
        let r = (1 - i % 2) as f32 * 100.0 + 100.0;
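        // For example, i = 0 gives a = PI/2 and r = 200.0, so the first pushed vertex
        // (vertex 1) lands at [0.0, 200.0, 0.0], the top point of the star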
// Add the vertex coordinates
v_pos.push([r * a.cos(), r * a.sin(), 0.0]);
}
// Set the position attribute
star.insert_attribute(Mesh::ATTRIBUTE_POSITION, v_pos);
    // And an RGBA color attribute as well
let mut v_color: Vec<u32> = vec![Color::BLACK.as_linear_rgba_u32()];
v_color.extend_from_slice(&[Color::YELLOW.as_linear_rgba_u32(); 10]);
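    // NOTE: `as_linear_rgba_u32` packs the four 8-bit color channels into one `u32`,
    // red in the least significant byte; the vertex shader below unpacks it with the
    // matching bit shifts (0, 8, 16, 24)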
star.insert_attribute(
MeshVertexAttribute::new("Vertex_Color", 1, VertexFormat::Uint32),
v_color,
);
    // Now, we specify the indices of the vertices that make up the triangles of our
    // star. Triangles must be specified with counter-clockwise winding (that will be
    // the front face, the one that gets colored). Since we are using a triangle list,
    // each triangle is given as 3 consecutive indices:
    // First triangle: 0, 1, 10
    // Second triangle: 0, 2, 1
    // Third triangle: 0, 3, 2
    // etc.
    // Last triangle: 0, 10, 9
let mut indices = vec![0, 1, 10];
for i in 2..=10 {
indices.extend_from_slice(&[0, i, i - 1]);
}
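    // The resulting index buffer is [0, 1, 10,  0, 2, 1,  0, 3, 2,  ...,  0, 10, 9]:
    // a fan of ten triangles that all share the center vertex 0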
star.set_indices(Some(Indices::U32(indices)));
// We can now spawn the entities for the star and the camera
commands.spawn_bundle((
// We use a marker component to identify the custom colored meshes
ColoredMesh2d::default(),
// The `Handle<Mesh>` needs to be wrapped in a `Mesh2dHandle` to use 2d rendering instead of 3d
Mesh2dHandle(meshes.add(star)),
// These other components are needed for 2d meshes to be rendered
Transform::default(),
GlobalTransform::default(),
Visibility::default(),
ComputedVisibility::default(),
));
    // And use an orthographic projection
    commands.spawn_bundle(Camera2dBundle::default());
}
/// A marker component for colored 2d meshes
#[derive(Component, Default)]
pub struct ColoredMesh2d;
/// Custom pipeline for 2d meshes with vertex colors
pub struct ColoredMesh2dPipeline {
/// this pipeline wraps the standard [`Mesh2dPipeline`]
mesh2d_pipeline: Mesh2dPipeline,
}
impl FromWorld for ColoredMesh2dPipeline {
fn from_world(world: &mut World) -> Self {
Self {
mesh2d_pipeline: Mesh2dPipeline::from_world(world),
}
}
}
// We implement `SpecializedRenderPipeline` to customize the default rendering from `Mesh2dPipeline`
impl SpecializedRenderPipeline for ColoredMesh2dPipeline {
type Key = Mesh2dPipelineKey;
fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
// Customize how to store the meshes' vertex attributes in the vertex buffer
// Our meshes only have position and color
let formats = vec![
// Position
VertexFormat::Float32x3,
// Color
VertexFormat::Uint32,
];
let vertex_layout =
VertexBufferLayout::from_vertex_formats(VertexStepMode::Vertex, formats);
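        // NOTE: this layout matches the mesh's vertex buffer because attributes are
        // laid out in ascending `MeshVertexAttribute` id order: `Mesh::ATTRIBUTE_POSITION`
        // (id 0) comes before our custom color attribute (id 1), so the formats above
        // line up with shader locations 0 and 1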
RenderPipelineDescriptor {
vertex: VertexState {
// Use our custom shader
shader: COLORED_MESH2D_SHADER_HANDLE.typed::<Shader>(),
entry_point: "vertex".into(),
shader_defs: Vec::new(),
// Use our custom vertex buffer
buffers: vec![vertex_layout],
},
fragment: Some(FragmentState {
// Use our custom shader
shader: COLORED_MESH2D_SHADER_HANDLE.typed::<Shader>(),
shader_defs: Vec::new(),
entry_point: "fragment".into(),
targets: vec![ColorTargetState {
format: TextureFormat::bevy_default(),
blend: Some(BlendState::ALPHA_BLENDING),
write_mask: ColorWrites::ALL,
}],
}),
// Use the two standard uniforms for 2d meshes
layout: Some(vec![
// Bind group 0 is the view uniform
self.mesh2d_pipeline.view_layout.clone(),
// Bind group 1 is the mesh uniform
self.mesh2d_pipeline.mesh_layout.clone(),
]),
primitive: PrimitiveState {
front_face: FrontFace::Ccw,
cull_mode: Some(Face::Back),
unclipped_depth: false,
polygon_mode: PolygonMode::Fill,
conservative: false,
topology: key.primitive_topology(),
strip_index_format: None,
},
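            // This pipeline does not use a depth buffer; draw order within the 2d
            // transparent pass comes from the sort key assigned in `queue_colored_mesh2d`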
depth_stencil: None,
multisample: MultisampleState {
count: key.msaa_samples(),
mask: !0,
alpha_to_coverage_enabled: false,
},
label: Some("colored_mesh2d_pipeline".into()),
}
}
}
// This specifies how to render a colored 2d mesh
type DrawColoredMesh2d = (
// Set the pipeline
SetItemPipeline,
// Set the view uniform as bind group 0
SetMesh2dViewBindGroup<0>,
// Set the mesh uniform as bind group 1
SetMesh2dBindGroup<1>,
// Draw the mesh
DrawMesh2d,
);
// The custom shader can be inline like here, included from another file at build time
// using `include_str!()`, or loaded like any other asset with `asset_server.load()`.
const COLORED_MESH2D_SHADER: &str = r"
// Import the standard 2d mesh uniforms and set their bind groups
#import bevy_sprite::mesh2d_types
#import bevy_sprite::mesh2d_view_bindings
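// The per-mesh uniform containing the model matrix, bound by `SetMesh2dBindGroup<1>`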
[[group(1), binding(0)]]
var<uniform> mesh: Mesh2d;
// NOTE: Bindings must come before functions that use them!
#import bevy_sprite::mesh2d_functions
// The structure of the vertex buffer is as specified in `specialize()`
struct Vertex {
[[location(0)]] position: vec3<f32>;
[[location(1)]] color: u32;
};
struct VertexOutput {
// The vertex shader must set the on-screen position of the vertex
[[builtin(position)]] clip_position: vec4<f32>;
// We pass the vertex color to the fragment shader in location 0
[[location(0)]] color: vec4<f32>;
};
/// Entry point for the vertex shader
[[stage(vertex)]]
fn vertex(vertex: Vertex) -> VertexOutput {
var out: VertexOutput;
// Project the world position of the mesh into screen position
out.clip_position = mesh2d_position_local_to_clip(mesh.model, vec4<f32>(vertex.position, 1.0));
// Unpack the `u32` from the vertex buffer into the `vec4<f32>` used by the fragment shader
out.color = vec4<f32>((vec4<u32>(vertex.color) >> vec4<u32>(0u, 8u, 16u, 24u)) & vec4<u32>(255u)) / 255.0;
return out;
}
// The input of the fragment shader must correspond to the output of the vertex shader for all `location`s
struct FragmentInput {
// The color is interpolated between vertices by default
[[location(0)]] color: vec4<f32>;
};
/// Entry point for the fragment shader
[[stage(fragment)]]
fn fragment(in: FragmentInput) -> [[location(0)]] vec4<f32> {
return in.color;
}
";
/// Plugin that renders [`ColoredMesh2d`]s
pub struct ColoredMesh2dPlugin;
/// Handle to the custom shader with a unique random ID
pub const COLORED_MESH2D_SHADER_HANDLE: HandleUntyped =
HandleUntyped::weak_from_u64(Shader::TYPE_UUID, 13828845428412094821);
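// NOTE: `weak_from_u64` creates a weak handle from a hard-coded id, so the shader can
// be referenced as a constant; `ColoredMesh2dPlugin::build` below registers the actual
// shader source under this handle with `set_untracked`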
impl Plugin for ColoredMesh2dPlugin {
fn build(&self, app: &mut App) {
// Load our custom shader
let mut shaders = app.world.resource_mut::<Assets<Shader>>();
shaders.set_untracked(
COLORED_MESH2D_SHADER_HANDLE,
Shader::from_wgsl(COLORED_MESH2D_SHADER),
);
// Register our custom draw function and pipeline, and add our render systems
let render_app = app.get_sub_app_mut(RenderApp).unwrap();
render_app
.add_render_command::<Transparent2d, DrawColoredMesh2d>()
.init_resource::<ColoredMesh2dPipeline>()
.init_resource::<SpecializedRenderPipelines<ColoredMesh2dPipeline>>()
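            // Extract systems copy data from the main world into the render world each
            // frame; Queue systems then turn the extracted data into phase items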
.add_system_to_stage(RenderStage::Extract, extract_colored_mesh2d)
.add_system_to_stage(RenderStage::Queue, queue_colored_mesh2d);
}
}
/// Extract the [`ColoredMesh2d`] marker component into the render app
pub fn extract_colored_mesh2d(
mut commands: Commands,
mut previous_len: Local<usize>,
query: Query<(Entity, &ComputedVisibility), With<ColoredMesh2d>>,
) {
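    // Reuse last frame's entity count as a capacity hint so the Vec usually does not
    // reallocate while being filled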
let mut values = Vec::with_capacity(*previous_len);
for (entity, computed_visibility) in query.iter() {
if !computed_visibility.is_visible {
continue;
}
values.push((entity, (ColoredMesh2d,)));
}
*previous_len = values.len();
commands.insert_or_spawn_batch(values);
}
/// Queue the 2d meshes marked with [`ColoredMesh2d`] using our custom pipeline and draw function
#[allow(clippy::too_many_arguments)]
pub fn queue_colored_mesh2d(
transparent_draw_functions: Res<DrawFunctions<Transparent2d>>,
colored_mesh2d_pipeline: Res<ColoredMesh2dPipeline>,
mut pipelines: ResMut<SpecializedRenderPipelines<ColoredMesh2dPipeline>>,
mut pipeline_cache: ResMut<PipelineCache>,
msaa: Res<Msaa>,
render_meshes: Res<RenderAssets<Mesh>>,
colored_mesh2d: Query<(&Mesh2dHandle, &Mesh2dUniform), With<ColoredMesh2d>>,
mut views: Query<(&VisibleEntities, &mut RenderPhase<Transparent2d>)>,
) {
if colored_mesh2d.is_empty() {
return;
}
// Iterate each view (a camera is a view)
for (visible_entities, mut transparent_phase) in views.iter_mut() {
let draw_colored_mesh2d = transparent_draw_functions
.read()
.get_id::<DrawColoredMesh2d>()
.unwrap();
let mesh_key = Mesh2dPipelineKey::from_msaa_samples(msaa.samples);
// Queue all entities visible to that view
for visible_entity in &visible_entities.entities {
if let Ok((mesh2d_handle, mesh2d_uniform)) = colored_mesh2d.get(*visible_entity) {
// Get our specialized pipeline
let mut mesh2d_key = mesh_key;
if let Some(mesh) = render_meshes.get(&mesh2d_handle.0) {
mesh2d_key |=
Mesh2dPipelineKey::from_primitive_topology(mesh.primitive_topology);
}
let pipeline_id =
pipelines.specialize(&mut pipeline_cache, &colored_mesh2d_pipeline, mesh2d_key);
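                // The fourth column (`w_axis`) of the model matrix is its translation,
                // so this is the mesh's world-space z coordinate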
let mesh_z = mesh2d_uniform.transform.w_axis.z;
transparent_phase.add(Transparent2d {
entity: *visible_entity,
draw_function: draw_colored_mesh2d,
pipeline: pipeline_id,
// The 2d render items are sorted according to their z value before rendering,
// in order to get correct transparency
sort_key: FloatOrd(mesh_z),
// This material is not batched
batch_range: None,
});
}
}
}
}