# Camera Driven Rendering (#4745)

This adds "high level camera driven rendering" to Bevy. The goal is to give users more control over what gets rendered (and where) without needing to deal with render logic. This will make scenarios like "render to texture", "multiple windows", "split screen", "2d on 3d", "3d on 2d", "pass layering", and more significantly easier. 

Here is an [example of a 2d render sandwiched between two 3d renders (each from a different perspective)](https://gist.github.com/cart/4fe56874b2e53bc5594a182fc76f4915):
![image](https://user-images.githubusercontent.com/2694663/168411086-af13dec8-0093-4a84-bdd4-d4362d850ffa.png)

Users can now spawn a camera, point it at a `RenderTarget` (a texture or a window), and it will "just work". 

Rendering to a second window is as simple as spawning a second camera and assigning it to a specific window id:
```rust
// main camera (main window)
commands.spawn_bundle(Camera2dBundle::default());

// second camera (other window)
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Window(window_id),
        ..default()
    },
    ..default()
});
```
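
Here, `window_id` is whatever id the second window was created with. For context, a rough sketch of opening that window (based on the `CreateWindow` event as it exists around this Bevy version; the window settings are arbitrary):

```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;
use bevy::window::{CreateWindow, WindowId};

fn setup_second_window(
    mut create_window_events: EventWriter<CreateWindow>,
    mut commands: Commands,
) {
    // ask the windowing backend to open a second window
    let window_id = WindowId::new();
    create_window_events.send(CreateWindow {
        id: window_id,
        descriptor: WindowDescriptor {
            width: 800.0,
            height: 600.0,
            title: "Second window".to_string(),
            ..default()
        },
    });

    // this camera renders into the new window
    commands.spawn_bundle(Camera2dBundle {
        camera: Camera {
            target: RenderTarget::Window(window_id),
            ..default()
        },
        ..default()
    });
}
```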

Rendering to a texture is as simple as pointing the camera at a texture:

```rust
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Texture(image_handle),
        ..default()
    },
    ..default()
});
```
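
The `image_handle` above must refer to an image that can be used as a render attachment. A minimal sketch of creating one (the size, format, and helper name here are arbitrary):

```rust
use bevy::prelude::*;
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat, TextureUsages};

/// Create a blank 512x512 image that a camera can render into.
fn make_render_target(images: &mut Assets<Image>) -> Handle<Image> {
    let mut image = Image::new_fill(
        Extent3d {
            width: 512,
            height: 512,
            depth_or_array_layers: 1,
        },
        TextureDimension::D2,
        &[0, 0, 0, 255],
        TextureFormat::Bgra8UnormSrgb,
    );
    // the image must be usable as a render attachment, and as a sampled texture
    // if it is going to be displayed on a sprite or material afterwards
    image.texture_descriptor.usage = TextureUsages::TEXTURE_BINDING
        | TextureUsages::COPY_DST
        | TextureUsages::RENDER_ATTACHMENT;
    images.add(image)
}
```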

Cameras now have a "render priority", which controls the order they are drawn in. If you want to use a camera's output texture as a texture in the main pass, just set its priority to a number lower than the main pass camera's (which defaults to `0`).

```rust
// main pass camera with a default priority of 0
commands.spawn_bundle(Camera2dBundle::default());

commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Texture(image_handle.clone()),
        priority: -1,
        ..default()
    },
    ..default()
});

commands.spawn_bundle(SpriteBundle {
    texture: image_handle,
    ..default()
});
```

Priority can also be used to layer cameras on top of each other for the same `RenderTarget`. This is what "2d on top of 3d" looks like in the new system:

```rust
commands.spawn_bundle(Camera3dBundle::default());

commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        // this will render 2d entities "on top" of the default 3d camera's render
        priority: 1,
        ..default()
    },
    ..default()
});
```

There is no longer the concept of a global "active camera". Resources like `ActiveCamera<Camera2d>` and `ActiveCamera<Camera3d>` have been replaced with the camera-specific `Camera::is_active` field. This does put the onus on users to manage which cameras should be active.
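
Since there is no global resource to flip anymore, switching cameras is just a matter of toggling that field. A minimal sketch (the key binding is arbitrary, and a real app would probably target specific camera entities):

```rust
use bevy::prelude::*;

// Flip every camera's `is_active` flag when Space is pressed.
fn toggle_cameras(keyboard: Res<Input<KeyCode>>, mut cameras: Query<&mut Camera>) {
    if keyboard.just_pressed(KeyCode::Space) {
        for mut camera in cameras.iter_mut() {
            camera.is_active = !camera.is_active;
        }
    }
}
```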

Cameras are now assigned a single render graph as an "entry point", which is configured on each camera entity using the new `CameraRenderGraph` component. The old `PerspectiveCameraBundle` and `OrthographicCameraBundle` (generic on camera marker components like `Camera2d` and `Camera3d`) have been replaced by `Camera3dBundle` and `Camera2dBundle`, which set 3d and 2d default values for the `CameraRenderGraph` and projections.

```rust
// old 3d perspective camera
commands.spawn_bundle(PerspectiveCameraBundle::default())

// new 3d perspective camera
commands.spawn_bundle(Camera3dBundle::default())
```

```rust
// old 2d orthographic camera
commands.spawn_bundle(OrthographicCameraBundle::new_2d())

// new 2d orthographic camera
commands.spawn_bundle(Camera2dBundle::default())
```

```rust
// old 3d orthographic camera
commands.spawn_bundle(OrthographicCameraBundle::new_3d())

// new 3d orthographic camera
commands.spawn_bundle(Camera3dBundle {
    projection: OrthographicProjection {
        scale: 3.0,
        scaling_mode: ScalingMode::FixedVertical,
        ..default()
    }.into(),
    ..default()
})
```

Note that `Camera3dBundle` now uses a new `Projection` enum instead of hard coding the projection into the type. There are a number of motivators for this change: the render graph is now a part of the bundle, the way "generic bundles" work in the Rust type system prevents nice `..default()` syntax, and changing projections at runtime is much easier with an enum (e.g. for editor scenarios). I'm open to discussing this choice, but I'm relatively certain we will all come to the same conclusion here. `Camera2dBundle` and `Camera3dBundle` are much clearer than being generic on marker components / using non-default constructors.
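
To illustrate the "changing projections at runtime" point, a hypothetical system can swap the `Projection` component in place (the key binding and default values are made up):

```rust
use bevy::prelude::*;
use bevy::render::camera::{OrthographicProjection, PerspectiveProjection, Projection};

// Toggle every 3d camera between perspective and orthographic projection.
fn toggle_projection(keyboard: Res<Input<KeyCode>>, mut projections: Query<&mut Projection>) {
    if keyboard.just_pressed(KeyCode::P) {
        for mut projection in projections.iter_mut() {
            let new_projection = match *projection {
                Projection::Perspective(_) => {
                    Projection::Orthographic(OrthographicProjection::default())
                }
                Projection::Orthographic(_) => {
                    Projection::Perspective(PerspectiveProjection::default())
                }
            };
            *projection = new_projection;
        }
    }
}
```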

If you want to run a custom render graph on a camera, just set the `CameraRenderGraph` component:

```rust
commands.spawn_bundle(Camera3dBundle {
    camera_render_graph: CameraRenderGraph::new(some_render_graph_name),
    ..default()
})
```

Just note that if the graph requires data from specific components to work (such as `Camera3d` config, which is provided in the `Camera3dBundle`), make sure the relevant components have been added.
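
For reference, registering such a custom graph follows the same pattern the built-in `core_2d` / `core_3d` graphs use in this PR: build a `RenderGraph`, give it a view entity input slot, and add it as a sub graph. A hedged sketch (the graph, node, and slot names are placeholders, and `node` stands in for your own `Node` implementation):

```rust
use bevy::prelude::*;
use bevy::render::render_graph::{Node, RenderGraph, SlotInfo, SlotType};

// `render_app` is the render sub app; `node` is your own pass node.
fn register_custom_graph(render_app: &mut App, node: impl Node) {
    let mut graph = render_app.world.resource_mut::<RenderGraph>();

    let mut custom_graph = RenderGraph::default();
    custom_graph.add_node("my_pass", node);
    // cameras hand their view entity to the graph through this input slot
    let input_node_id =
        custom_graph.set_input(vec![SlotInfo::new("view_entity", SlotType::Entity)]);
    custom_graph
        // "in_view" must match whatever slot name your node declares in `Node::input`
        .add_slot_edge(input_node_id, "view_entity", "my_pass", "in_view")
        .unwrap();

    // the name you pass to `CameraRenderGraph::new` above
    graph.add_sub_graph("some_render_graph_name", custom_graph);
}
```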

Speaking of using components to configure graphs / passes, there are a number of new configuration options:

```rust
commands.spawn_bundle(Camera3dBundle {
    camera_3d: Camera3d {
        // overrides the default global clear color 
        clear_color: ClearColorConfig::Custom(Color::RED),
        ..default()
    },
    ..default()
});

commands.spawn_bundle(Camera3dBundle {
    camera_3d: Camera3d {
        // disables clearing
        clear_color: ClearColorConfig::None,
        ..default()
    },
    ..default()
});
```

Expect to see more of the "graph configuration Components on Cameras" pattern in the future.

By popular demand, UI no longer requires a dedicated camera. `UiCameraBundle` has been removed. `Camera2dBundle` and `Camera3dBundle` now both default to rendering UI as part of their own render graphs. To disable UI rendering for a camera, set `is_enabled: false` on its `CameraUi` component:

```rust
commands
    .spawn_bundle(Camera3dBundle::default())
    .insert(CameraUi {
        is_enabled: false,
        ..default()
    });
```

## Other Changes

* The separate clear pass has been removed. We should revisit this for things like sky rendering, but I think this PR should "keep it simple" until we're ready to properly support that (for code complexity and performance reasons). We can come up with the right design for a modular clear pass in a follow-up PR.
* I reorganized `bevy_core_pipeline` into `Core2dPlugin` and `Core3dPlugin` (and `core_2d` / `core_3d` modules). Everything is pretty much the same as before, just logically separate. I've moved relevant types (like `Camera2d`, `Camera3d`, `Camera3dBundle`, `Camera2dBundle`) into their respective modules, which is what motivated this reorganization.
* I adapted the `scene_viewer` example (which relied on the old `ActiveCamera` behavior) to the new system. I also refactored bits and pieces to be a bit simpler.
* All of the examples have been ported to the new camera approach. `render_to_texture` and `multiple_windows` are now _much_ simpler. I removed `two_passes` because it is less relevant with the new approach. If someone wants to add a new "layered custom pass with `CameraRenderGraph`" example, that might fill a similar niche. But I don't feel much pressure to add that in this PR.
* Cameras now have `logical_target_size` and `physical_target_size` fields, which makes finding the size of a camera's render target _much_ simpler. As a result, the `Assets<Image>` and `Windows` parameters were removed from `Camera::world_to_screen` (now `Camera::world_to_viewport`), making that operation much more ergonomic (see the short sketch after this list).
* Render order ambiguities between cameras with the same target and the same priority now produce a warning. This accomplishes two goals:
    1. Now that there is no "global" active camera, by default spawning two cameras will result in two renders (one covering the other). This would be a silent performance killer that would be hard to detect after the fact. By detecting ambiguities, we can provide a helpful warning when this occurs.
    2. Render order ambiguities could result in unexpected / unpredictable render results. Resolving them makes sense.
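
As a quick illustration of that last ergonomics point, a sketch using the renamed `world_to_viewport` and the new target-size field (the `Player` marker is hypothetical):

```rust
use bevy::prelude::*;

// Hypothetical marker for the entity we want to project into viewport space.
#[derive(Component)]
struct Player;

fn print_player_screen_position(
    cameras: Query<(&Camera, &GlobalTransform)>,
    players: Query<&GlobalTransform, With<Player>>,
) {
    for (camera, camera_transform) in cameras.iter() {
        // no more `Windows` / `Assets<Image>` parameters: the camera already knows its target size
        if let Some(size) = camera.logical_target_size {
            info!("render target is {}x{} logical pixels", size.x, size.y);
        }
        for player_transform in players.iter() {
            if let Some(viewport_pos) =
                camera.world_to_viewport(camera_transform, player_transform.translation)
            {
                info!("player is at {:?} in viewport space", viewport_pos);
            }
        }
    }
}
```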

## Follow Up Work

* Per-camera viewports, which will make it possible to render to a smaller area inside of a `RenderTarget` (great for something like split screen)
* Camera-specific MSAA config (should use the same "overriding" pattern used for `ClearColor`)
* Graph Based Camera Ordering: priorities are simple, but they make complicated ordering constraints harder to express. We should consider adopting a "graph based" camera ordering model with "before" and "after" relationships to other cameras (or build it "on top" of the priority system).
* Consider allowing graphs to run subgraphs from any nest level (aka a global namespace for graphs). Right now the 2d and 3d graphs each need their own UI subgraph, which feels "fine" in the short term. But being able to share subgraphs between other subgraphs seems valuable.
* Consider splitting `bevy_core_pipeline` into `bevy_core_2d` and `bevy_core_3d` packages. There's a shared "clear color" dependency here, which would need a new home.
Commit f487407e07 (parent f2b53de4aa) by Carter Anderson, 2022-06-02 00:12:17 +00:00. 120 changed files with 1537 additions and 1742 deletions.


@ -19,7 +19,12 @@ trace = []
# bevy
bevy_app = { path = "../bevy_app", version = "0.8.0-dev" }
bevy_asset = { path = "../bevy_asset", version = "0.8.0-dev" }
bevy_derive = { path = "../bevy_derive", version = "0.8.0-dev" }
bevy_ecs = { path = "../bevy_ecs", version = "0.8.0-dev" }
bevy_reflect = { path = "../bevy_reflect", version = "0.8.0-dev" }
bevy_render = { path = "../bevy_render", version = "0.8.0-dev" }
bevy_transform = { path = "../bevy_transform", version = "0.8.0-dev" }
bevy_utils = { path = "../bevy_utils", version = "0.8.0-dev" }
serde = { version = "1", features = ["derive"] }


@ -0,0 +1,32 @@
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::prelude::*;
use bevy_reflect::{Reflect, ReflectDeserialize};
use bevy_render::{color::Color, extract_resource::ExtractResource};
use serde::{Deserialize, Serialize};
#[derive(Reflect, Serialize, Deserialize, Clone, Debug)]
#[reflect_value(Serialize, Deserialize)]
pub enum ClearColorConfig {
Default,
Custom(Color),
None,
}
impl Default for ClearColorConfig {
fn default() -> Self {
ClearColorConfig::Default
}
}
/// When used as a resource, sets the color that is used to clear the screen between frames.
///
/// This color appears as the "background" color for simple apps, when
/// there are portions of the screen with nothing rendered.
#[derive(Component, Clone, Debug, Deref, DerefMut, ExtractResource)]
pub struct ClearColor(pub Color);
impl Default for ClearColor {
fn default() -> Self {
Self(Color::rgb(0.4, 0.4, 0.4))
}
}


@ -1,128 +0,0 @@
use std::collections::HashSet;
use crate::{ClearColor, RenderTargetClearColors};
use bevy_ecs::prelude::*;
use bevy_render::{
camera::{ExtractedCamera, RenderTarget},
prelude::Image,
render_asset::RenderAssets,
render_graph::{Node, NodeRunError, RenderGraphContext, SlotInfo},
render_resource::{
LoadOp, Operations, RenderPassColorAttachment, RenderPassDepthStencilAttachment,
RenderPassDescriptor,
},
renderer::RenderContext,
view::{ExtractedView, ExtractedWindows, ViewDepthTexture, ViewTarget},
};
pub struct ClearPassNode {
query: QueryState<
(
&'static ViewTarget,
Option<&'static ViewDepthTexture>,
Option<&'static ExtractedCamera>,
),
With<ExtractedView>,
>,
}
impl ClearPassNode {
pub fn new(world: &mut World) -> Self {
Self {
query: QueryState::new(world),
}
}
}
impl Node for ClearPassNode {
fn input(&self) -> Vec<SlotInfo> {
vec![]
}
fn update(&mut self, world: &mut World) {
self.query.update_archetypes(world);
}
fn run(
&self,
_graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
let mut cleared_targets = HashSet::new();
let clear_color = world.resource::<ClearColor>();
let render_target_clear_colors = world.resource::<RenderTargetClearColors>();
// This gets all ViewTargets and ViewDepthTextures and clears its attachments
// TODO: This has the potential to clear the same target multiple times, if there
// are multiple views drawing to the same target. This should be fixed when we make
// clearing happen on "render targets" instead of "views" (see the TODO below for more context).
for (target, depth, camera) in self.query.iter_manual(world) {
let mut color = &clear_color.0;
if let Some(camera) = camera {
cleared_targets.insert(&camera.target);
if let Some(target_color) = render_target_clear_colors.get(&camera.target) {
color = target_color;
}
}
let pass_descriptor = RenderPassDescriptor {
label: Some("clear_pass"),
color_attachments: &[target.get_color_attachment(Operations {
load: LoadOp::Clear((*color).into()),
store: true,
})],
depth_stencil_attachment: depth.map(|depth| RenderPassDepthStencilAttachment {
view: &depth.view,
depth_ops: Some(Operations {
load: LoadOp::Clear(0.0),
store: true,
}),
stencil_ops: None,
}),
};
render_context
.command_encoder
.begin_render_pass(&pass_descriptor);
}
// TODO: This is a hack to ensure we don't call present() on frames without any work,
// which will cause panics. The real fix here is to clear "render targets" directly
// instead of "views". This should be removed once full RenderTargets are implemented.
let windows = world.resource::<ExtractedWindows>();
let images = world.resource::<RenderAssets<Image>>();
for target in render_target_clear_colors.colors.keys().cloned().chain(
windows
.values()
.map(|window| RenderTarget::Window(window.id)),
) {
// skip windows that have already been cleared
if cleared_targets.contains(&target) {
continue;
}
let pass_descriptor = RenderPassDescriptor {
label: Some("clear_pass"),
color_attachments: &[RenderPassColorAttachment {
view: target.get_texture_view(windows, images).unwrap(),
resolve_target: None,
ops: Operations {
load: LoadOp::Clear(
(*render_target_clear_colors
.get(&target)
.unwrap_or(&clear_color.0))
.into(),
),
store: true,
},
}],
depth_stencil_attachment: None,
};
render_context
.command_encoder
.begin_render_pass(&pass_descriptor);
}
Ok(())
}
}


@ -1,20 +0,0 @@
use bevy_ecs::world::World;
use bevy_render::{
render_graph::{Node, NodeRunError, RenderGraphContext},
renderer::RenderContext,
};
pub struct ClearPassDriverNode;
impl Node for ClearPassDriverNode {
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
_world: &World,
) -> Result<(), NodeRunError> {
graph.run_sub_graph(crate::clear_graph::NAME, vec![])?;
Ok(())
}
}


@ -0,0 +1,82 @@
use crate::clear_color::ClearColorConfig;
use bevy_ecs::{prelude::*, query::QueryItem};
use bevy_reflect::Reflect;
use bevy_render::{
camera::{
Camera, CameraProjection, CameraRenderGraph, DepthCalculation, OrthographicProjection,
},
extract_component::ExtractComponent,
primitives::Frustum,
view::VisibleEntities,
};
use bevy_transform::prelude::{GlobalTransform, Transform};
#[derive(Component, Default, Reflect, Clone)]
#[reflect(Component)]
pub struct Camera2d {
pub clear_color: ClearColorConfig,
}
impl ExtractComponent for Camera2d {
type Query = &'static Self;
type Filter = With<Camera>;
fn extract_component(item: QueryItem<Self::Query>) -> Self {
item.clone()
}
}
#[derive(Bundle)]
pub struct Camera2dBundle {
pub camera: Camera,
pub camera_render_graph: CameraRenderGraph,
pub projection: OrthographicProjection,
pub visible_entities: VisibleEntities,
pub frustum: Frustum,
pub transform: Transform,
pub global_transform: GlobalTransform,
pub camera_2d: Camera2d,
}
impl Default for Camera2dBundle {
fn default() -> Self {
Self::new_with_far(1000.0)
}
}
impl Camera2dBundle {
/// Create an orthographic projection camera with a custom Z position.
///
/// The camera is placed at `Z=far-0.1`, looking toward the world origin `(0,0,0)`.
/// Its orthographic projection extends from `0.0` to `-far` in camera view space,
/// corresponding to `Z=far-0.1` (closest to camera) to `Z=-0.1` (furthest away from
/// camera) in world space.
pub fn new_with_far(far: f32) -> Self {
// we want 0 to be "closest" and +far to be "farthest" in 2d, so we offset
// the camera's translation by far and use a right handed coordinate system
let projection = OrthographicProjection {
far,
depth_calculation: DepthCalculation::ZDifference,
..Default::default()
};
let transform = Transform::from_xyz(0.0, 0.0, far - 0.1);
let view_projection =
projection.get_projection_matrix() * transform.compute_matrix().inverse();
let frustum = Frustum::from_view_projection(
&view_projection,
&transform.translation,
&transform.back(),
projection.far(),
);
Self {
camera_render_graph: CameraRenderGraph::new(crate::core_2d::graph::NAME),
projection,
visible_entities: VisibleEntities::default(),
frustum,
transform,
global_transform: Default::default(),
camera: Camera::default(),
camera_2d: Camera2d::default(),
}
}
}


@ -1,4 +1,7 @@
use crate::Transparent2d;
use crate::{
clear_color::{ClearColor, ClearColorConfig},
core_2d::{camera_2d::Camera2d, Transparent2d},
};
use bevy_ecs::prelude::*;
use bevy_render::{
render_graph::{Node, NodeRunError, RenderGraphContext, SlotInfo, SlotType},
@ -9,8 +12,14 @@ use bevy_render::{
};
pub struct MainPass2dNode {
query:
QueryState<(&'static RenderPhase<Transparent2d>, &'static ViewTarget), With<ExtractedView>>,
query: QueryState<
(
&'static RenderPhase<Transparent2d>,
&'static ViewTarget,
&'static Camera2d,
),
With<ExtractedView>,
>,
}
impl MainPass2dNode {
@ -18,7 +27,7 @@ impl MainPass2dNode {
pub fn new(world: &mut World) -> Self {
Self {
query: QueryState::new(world),
query: world.query_filtered(),
}
}
}
@ -39,20 +48,24 @@ impl Node for MainPass2dNode {
world: &World,
) -> Result<(), NodeRunError> {
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
// If there is no view entity, do not try to process the render phase for the view
let (transparent_phase, target) = match self.query.get_manual(world, view_entity) {
Ok(it) => it,
_ => return Ok(()),
};
if transparent_phase.items.is_empty() {
return Ok(());
}
let (transparent_phase, target, camera_2d) =
if let Ok(result) = self.query.get_manual(world, view_entity) {
result
} else {
// no target
return Ok(());
};
let pass_descriptor = RenderPassDescriptor {
label: Some("main_pass_2d"),
color_attachments: &[target.get_color_attachment(Operations {
load: LoadOp::Load,
load: match camera_2d.clear_color {
ClearColorConfig::Default => {
LoadOp::Clear(world.resource::<ClearColor>().0.into())
}
ClearColorConfig::Custom(color) => LoadOp::Clear(color.into()),
ClearColorConfig::None => LoadOp::Load,
},
store: true,
})],
depth_stencil_attachment: None,


@ -0,0 +1,130 @@
mod camera_2d;
mod main_pass_2d_node;
pub mod graph {
pub const NAME: &str = "core_2d";
pub mod input {
pub const VIEW_ENTITY: &str = "view_entity";
}
pub mod node {
pub const MAIN_PASS: &str = "main_pass";
}
}
pub use camera_2d::*;
pub use main_pass_2d_node::*;
use bevy_app::{App, Plugin};
use bevy_ecs::prelude::*;
use bevy_render::{
camera::Camera,
extract_component::ExtractComponentPlugin,
render_graph::{RenderGraph, SlotInfo, SlotType},
render_phase::{
batch_phase_system, sort_phase_system, BatchedPhaseItem, CachedRenderPipelinePhaseItem,
DrawFunctionId, DrawFunctions, EntityPhaseItem, PhaseItem, RenderPhase,
},
render_resource::CachedRenderPipelineId,
RenderApp, RenderStage,
};
use bevy_utils::FloatOrd;
use std::ops::Range;
pub struct Core2dPlugin;
impl Plugin for Core2dPlugin {
fn build(&self, app: &mut App) {
app.register_type::<Camera2d>()
.add_plugin(ExtractComponentPlugin::<Camera2d>::default());
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<DrawFunctions<Transparent2d>>()
.add_system_to_stage(RenderStage::Extract, extract_core_2d_camera_phases)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<Transparent2d>)
.add_system_to_stage(RenderStage::PhaseSort, batch_phase_system::<Transparent2d>);
let pass_node_2d = MainPass2dNode::new(&mut render_app.world);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let mut draw_2d_graph = RenderGraph::default();
draw_2d_graph.add_node(graph::node::MAIN_PASS, pass_node_2d);
let input_node_id = draw_2d_graph.set_input(vec![SlotInfo::new(
graph::input::VIEW_ENTITY,
SlotType::Entity,
)]);
draw_2d_graph
.add_slot_edge(
input_node_id,
graph::input::VIEW_ENTITY,
graph::node::MAIN_PASS,
MainPass2dNode::IN_VIEW,
)
.unwrap();
graph.add_sub_graph(graph::NAME, draw_2d_graph);
}
}
pub struct Transparent2d {
pub sort_key: FloatOrd,
pub entity: Entity,
pub pipeline: CachedRenderPipelineId,
pub draw_function: DrawFunctionId,
/// Range in the vertex buffer of this item
pub batch_range: Option<Range<u32>>,
}
impl PhaseItem for Transparent2d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
self.sort_key
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Transparent2d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Transparent2d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
impl BatchedPhaseItem for Transparent2d {
fn batch_range(&self) -> &Option<Range<u32>> {
&self.batch_range
}
fn batch_range_mut(&mut self) -> &mut Option<Range<u32>> {
&mut self.batch_range
}
}
pub fn extract_core_2d_camera_phases(
mut commands: Commands,
cameras_2d: Query<(Entity, &Camera), With<Camera2d>>,
) {
for (entity, camera) in cameras_2d.iter() {
if camera.is_active {
commands
.get_or_spawn(entity)
.insert(RenderPhase::<Transparent2d>::default());
}
}
}


@ -0,0 +1,53 @@
use crate::clear_color::ClearColorConfig;
use bevy_ecs::{prelude::*, query::QueryItem};
use bevy_reflect::Reflect;
use bevy_render::{
camera::{Camera, CameraRenderGraph, Projection},
extract_component::ExtractComponent,
primitives::Frustum,
view::VisibleEntities,
};
use bevy_transform::prelude::{GlobalTransform, Transform};
#[derive(Component, Default, Reflect, Clone)]
#[reflect(Component)]
pub struct Camera3d {
pub clear_color: ClearColorConfig,
}
impl ExtractComponent for Camera3d {
type Query = &'static Self;
type Filter = With<Camera>;
fn extract_component(item: QueryItem<Self::Query>) -> Self {
item.clone()
}
}
#[derive(Bundle)]
pub struct Camera3dBundle {
pub camera: Camera,
pub camera_render_graph: CameraRenderGraph,
pub projection: Projection,
pub visible_entities: VisibleEntities,
pub frustum: Frustum,
pub transform: Transform,
pub global_transform: GlobalTransform,
pub camera_3d: Camera3d,
}
// NOTE: ideally Perspective and Orthographic defaults can share the same impl, but sadly it breaks rust's type inference
impl Default for Camera3dBundle {
fn default() -> Self {
Self {
camera_render_graph: CameraRenderGraph::new(crate::core_3d::graph::NAME),
camera: Default::default(),
projection: Default::default(),
visible_entities: Default::default(),
frustum: Default::default(),
transform: Default::default(),
global_transform: Default::default(),
camera_3d: Default::default(),
}
}
}


@ -1,4 +1,7 @@
use crate::{AlphaMask3d, Opaque3d, Transparent3d};
use crate::{
clear_color::{ClearColor, ClearColorConfig},
core_3d::{AlphaMask3d, Camera3d, Opaque3d, Transparent3d},
};
use bevy_ecs::prelude::*;
use bevy_render::{
render_graph::{Node, NodeRunError, RenderGraphContext, SlotInfo, SlotType},
@ -16,6 +19,7 @@ pub struct MainPass3dNode {
&'static RenderPhase<Opaque3d>,
&'static RenderPhase<AlphaMask3d>,
&'static RenderPhase<Transparent3d>,
&'static Camera3d,
&'static ViewTarget,
&'static ViewDepthTexture,
),
@ -28,7 +32,7 @@ impl MainPass3dNode {
pub fn new(world: &mut World) -> Self {
Self {
query: QueryState::new(world),
query: world.query_filtered(),
}
}
}
@ -49,13 +53,16 @@ impl Node for MainPass3dNode {
world: &World,
) -> Result<(), NodeRunError> {
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
let (opaque_phase, alpha_mask_phase, transparent_phase, target, depth) =
let (opaque_phase, alpha_mask_phase, transparent_phase, camera_3d, target, depth) =
match self.query.get_manual(world, view_entity) {
Ok(query) => query,
Err(_) => return Ok(()), // No window
Err(_) => {
return Ok(());
} // No window
};
if !opaque_phase.items.is_empty() {
// Always run opaque pass to ensure screen is cleared
{
// Run the opaque pass, sorted front-to-back
// NOTE: Scoped to drop the mutable borrow of render_context
#[cfg(feature = "trace")]
@ -65,14 +72,21 @@ impl Node for MainPass3dNode {
// NOTE: The opaque pass loads the color
// buffer as well as writing to it.
color_attachments: &[target.get_color_attachment(Operations {
load: LoadOp::Load,
load: match camera_3d.clear_color {
ClearColorConfig::Default => {
LoadOp::Clear(world.resource::<ClearColor>().0.into())
}
ClearColorConfig::Custom(color) => LoadOp::Clear(color.into()),
ClearColorConfig::None => LoadOp::Load,
},
store: true,
})],
depth_stencil_attachment: Some(RenderPassDepthStencilAttachment {
view: &depth.view,
// NOTE: The opaque main pass loads the depth buffer and possibly overwrites it
depth_ops: Some(Operations {
load: LoadOp::Load,
// NOTE: 0.0 is the far plane due to bevy's use of reverse-z projections
load: LoadOp::Clear(0.0),
store: true,
}),
stencil_ops: None,


@ -0,0 +1,250 @@
mod camera_3d;
mod main_pass_3d_node;
pub mod graph {
pub const NAME: &str = "core_3d";
pub mod input {
pub const VIEW_ENTITY: &str = "view_entity";
}
pub mod node {
pub const MAIN_PASS: &str = "main_pass";
}
}
pub use camera_3d::*;
pub use main_pass_3d_node::*;
use bevy_app::{App, Plugin};
use bevy_ecs::prelude::*;
use bevy_render::{
camera::{Camera, ExtractedCamera},
extract_component::ExtractComponentPlugin,
prelude::Msaa,
render_graph::{RenderGraph, SlotInfo, SlotType},
render_phase::{
sort_phase_system, CachedRenderPipelinePhaseItem, DrawFunctionId, DrawFunctions,
EntityPhaseItem, PhaseItem, RenderPhase,
},
render_resource::{
CachedRenderPipelineId, Extent3d, TextureDescriptor, TextureDimension, TextureFormat,
TextureUsages,
},
renderer::RenderDevice,
texture::TextureCache,
view::{ExtractedView, ViewDepthTexture},
RenderApp, RenderStage,
};
use bevy_utils::{FloatOrd, HashMap};
pub struct Core3dPlugin;
impl Plugin for Core3dPlugin {
fn build(&self, app: &mut App) {
app.register_type::<Camera3d>()
.add_plugin(ExtractComponentPlugin::<Camera3d>::default());
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<DrawFunctions<Opaque3d>>()
.init_resource::<DrawFunctions<AlphaMask3d>>()
.init_resource::<DrawFunctions<Transparent3d>>()
.add_system_to_stage(RenderStage::Extract, extract_core_3d_camera_phases)
.add_system_to_stage(RenderStage::Prepare, prepare_core_3d_views_system)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<Opaque3d>)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<AlphaMask3d>)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<Transparent3d>);
let pass_node_3d = MainPass3dNode::new(&mut render_app.world);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let mut draw_3d_graph = RenderGraph::default();
draw_3d_graph.add_node(graph::node::MAIN_PASS, pass_node_3d);
let input_node_id = draw_3d_graph.set_input(vec![SlotInfo::new(
graph::input::VIEW_ENTITY,
SlotType::Entity,
)]);
draw_3d_graph
.add_slot_edge(
input_node_id,
graph::input::VIEW_ENTITY,
graph::node::MAIN_PASS,
MainPass3dNode::IN_VIEW,
)
.unwrap();
graph.add_sub_graph(graph::NAME, draw_3d_graph);
}
}
pub struct Opaque3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for Opaque3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Opaque3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Opaque3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub struct AlphaMask3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for AlphaMask3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for AlphaMask3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for AlphaMask3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub struct Transparent3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for Transparent3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Transparent3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Transparent3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub fn extract_core_3d_camera_phases(
mut commands: Commands,
cameras_3d: Query<(Entity, &Camera), With<Camera3d>>,
) {
for (entity, camera) in cameras_3d.iter() {
if camera.is_active {
commands.get_or_spawn(entity).insert_bundle((
RenderPhase::<Opaque3d>::default(),
RenderPhase::<AlphaMask3d>::default(),
RenderPhase::<Transparent3d>::default(),
));
}
}
}
pub fn prepare_core_3d_views_system(
mut commands: Commands,
mut texture_cache: ResMut<TextureCache>,
msaa: Res<Msaa>,
render_device: Res<RenderDevice>,
views_3d: Query<
(Entity, &ExtractedView, Option<&ExtractedCamera>),
(
With<RenderPhase<Opaque3d>>,
With<RenderPhase<AlphaMask3d>>,
With<RenderPhase<Transparent3d>>,
),
>,
) {
let mut textures = HashMap::default();
for (entity, view, camera) in views_3d.iter() {
let mut get_cached_texture = || {
texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("view_depth_texture"),
size: Extent3d {
depth_or_array_layers: 1,
width: view.width as u32,
height: view.height as u32,
},
mip_level_count: 1,
sample_count: msaa.samples,
dimension: TextureDimension::D2,
format: TextureFormat::Depth32Float, /* PERF: vulkan docs recommend using 24
* bit depth for better performance */
usage: TextureUsages::RENDER_ATTACHMENT,
},
)
};
let cached_texture = if let Some(camera) = camera {
textures
.entry(camera.target.clone())
.or_insert_with(get_cached_texture)
.clone()
} else {
get_cached_texture()
};
commands.entity(entity).insert(ViewDepthTexture {
texture: cached_texture.texture,
view: cached_texture.default_view,
});
}
}


@ -1,419 +1,28 @@
mod clear_pass;
mod clear_pass_driver;
mod main_pass_2d;
mod main_pass_3d;
mod main_pass_driver;
pub mod clear_color;
pub mod core_2d;
pub mod core_3d;
pub mod prelude {
#[doc(hidden)]
pub use crate::ClearColor;
pub use crate::{
clear_color::ClearColor,
core_2d::{Camera2d, Camera2dBundle},
core_3d::{Camera3d, Camera3dBundle},
};
}
use bevy_utils::HashMap;
pub use clear_pass::*;
pub use clear_pass_driver::*;
pub use main_pass_2d::*;
pub use main_pass_3d::*;
pub use main_pass_driver::*;
use std::ops::Range;
use crate::{clear_color::ClearColor, core_2d::Core2dPlugin, core_3d::Core3dPlugin};
use bevy_app::{App, Plugin};
use bevy_ecs::prelude::*;
use bevy_render::{
camera::{ActiveCamera, Camera2d, Camera3d, ExtractedCamera, RenderTarget},
color::Color,
extract_resource::{ExtractResource, ExtractResourcePlugin},
render_graph::{EmptyNode, RenderGraph, SlotInfo, SlotType},
render_phase::{
batch_phase_system, sort_phase_system, BatchedPhaseItem, CachedRenderPipelinePhaseItem,
DrawFunctionId, DrawFunctions, EntityPhaseItem, PhaseItem, RenderPhase,
},
render_resource::*,
renderer::RenderDevice,
texture::TextureCache,
view::{ExtractedView, Msaa, ViewDepthTexture},
RenderApp, RenderStage,
};
use bevy_utils::FloatOrd;
/// When used as a resource, sets the color that is used to clear the screen between frames.
///
/// This color appears as the "background" color for simple apps, when
/// there are portions of the screen with nothing rendered.
#[derive(Clone, Debug, ExtractResource)]
pub struct ClearColor(pub Color);
impl Default for ClearColor {
fn default() -> Self {
Self(Color::rgb(0.4, 0.4, 0.4))
}
}
#[derive(Clone, Debug, Default, ExtractResource)]
pub struct RenderTargetClearColors {
colors: HashMap<RenderTarget, Color>,
}
impl RenderTargetClearColors {
pub fn get(&self, target: &RenderTarget) -> Option<&Color> {
self.colors.get(target)
}
pub fn insert(&mut self, target: RenderTarget, color: Color) {
self.colors.insert(target, color);
}
}
// Plugins that contribute to the RenderGraph should use the following label conventions:
// 1. Graph modules should have a NAME, input module, and node module (where relevant)
// 2. The "top level" graph is the plugin module root. Just add things like `pub mod node` directly under the plugin module
// 3. "sub graph" modules should be nested beneath their parent graph module
pub mod node {
pub const MAIN_PASS_DEPENDENCIES: &str = "main_pass_dependencies";
pub const MAIN_PASS_DRIVER: &str = "main_pass_driver";
pub const CLEAR_PASS_DRIVER: &str = "clear_pass_driver";
}
pub mod draw_2d_graph {
pub const NAME: &str = "draw_2d";
pub mod input {
pub const VIEW_ENTITY: &str = "view_entity";
}
pub mod node {
pub const MAIN_PASS: &str = "main_pass";
}
}
pub mod draw_3d_graph {
pub const NAME: &str = "draw_3d";
pub mod input {
pub const VIEW_ENTITY: &str = "view_entity";
}
pub mod node {
pub const MAIN_PASS: &str = "main_pass";
}
}
pub mod clear_graph {
pub const NAME: &str = "clear";
pub mod node {
pub const CLEAR_PASS: &str = "clear_pass";
}
}
use bevy_render::extract_resource::ExtractResourcePlugin;
#[derive(Default)]
pub struct CorePipelinePlugin;
#[derive(Debug, Hash, PartialEq, Eq, Clone, SystemLabel)]
pub enum CorePipelineRenderSystems {
SortTransparent2d,
}
impl Plugin for CorePipelinePlugin {
fn build(&self, app: &mut App) {
app.init_resource::<ClearColor>()
.init_resource::<RenderTargetClearColors>()
.add_plugin(ExtractResourcePlugin::<ClearColor>::default())
.add_plugin(ExtractResourcePlugin::<RenderTargetClearColors>::default());
let render_app = match app.get_sub_app_mut(RenderApp) {
Ok(render_app) => render_app,
Err(_) => return,
};
render_app
.init_resource::<DrawFunctions<Transparent2d>>()
.init_resource::<DrawFunctions<Opaque3d>>()
.init_resource::<DrawFunctions<AlphaMask3d>>()
.init_resource::<DrawFunctions<Transparent3d>>()
.add_system_to_stage(RenderStage::Extract, extract_core_pipeline_camera_phases)
.add_system_to_stage(RenderStage::Prepare, prepare_core_views_system)
.add_system_to_stage(
RenderStage::PhaseSort,
sort_phase_system::<Transparent2d>
.label(CorePipelineRenderSystems::SortTransparent2d),
)
.add_system_to_stage(
RenderStage::PhaseSort,
batch_phase_system::<Transparent2d>
.after(CorePipelineRenderSystems::SortTransparent2d),
)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<Opaque3d>)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<AlphaMask3d>)
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<Transparent3d>);
let clear_pass_node = ClearPassNode::new(&mut render_app.world);
let pass_node_2d = MainPass2dNode::new(&mut render_app.world);
let pass_node_3d = MainPass3dNode::new(&mut render_app.world);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let mut draw_2d_graph = RenderGraph::default();
draw_2d_graph.add_node(draw_2d_graph::node::MAIN_PASS, pass_node_2d);
let input_node_id = draw_2d_graph.set_input(vec![SlotInfo::new(
draw_2d_graph::input::VIEW_ENTITY,
SlotType::Entity,
)]);
draw_2d_graph
.add_slot_edge(
input_node_id,
draw_2d_graph::input::VIEW_ENTITY,
draw_2d_graph::node::MAIN_PASS,
MainPass2dNode::IN_VIEW,
)
.unwrap();
graph.add_sub_graph(draw_2d_graph::NAME, draw_2d_graph);
let mut draw_3d_graph = RenderGraph::default();
draw_3d_graph.add_node(draw_3d_graph::node::MAIN_PASS, pass_node_3d);
let input_node_id = draw_3d_graph.set_input(vec![SlotInfo::new(
draw_3d_graph::input::VIEW_ENTITY,
SlotType::Entity,
)]);
draw_3d_graph
.add_slot_edge(
input_node_id,
draw_3d_graph::input::VIEW_ENTITY,
draw_3d_graph::node::MAIN_PASS,
MainPass3dNode::IN_VIEW,
)
.unwrap();
graph.add_sub_graph(draw_3d_graph::NAME, draw_3d_graph);
let mut clear_graph = RenderGraph::default();
clear_graph.add_node(clear_graph::node::CLEAR_PASS, clear_pass_node);
graph.add_sub_graph(clear_graph::NAME, clear_graph);
graph.add_node(node::MAIN_PASS_DEPENDENCIES, EmptyNode);
graph.add_node(node::MAIN_PASS_DRIVER, MainPassDriverNode);
graph
.add_node_edge(node::MAIN_PASS_DEPENDENCIES, node::MAIN_PASS_DRIVER)
.unwrap();
graph.add_node(node::CLEAR_PASS_DRIVER, ClearPassDriverNode);
graph
.add_node_edge(node::CLEAR_PASS_DRIVER, node::MAIN_PASS_DRIVER)
.unwrap();
}
}
pub struct Transparent2d {
pub sort_key: FloatOrd,
pub entity: Entity,
pub pipeline: CachedRenderPipelineId,
pub draw_function: DrawFunctionId,
/// Range in the vertex buffer of this item
pub batch_range: Option<Range<u32>>,
}
impl PhaseItem for Transparent2d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
self.sort_key
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Transparent2d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Transparent2d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
impl BatchedPhaseItem for Transparent2d {
fn batch_range(&self) -> &Option<Range<u32>> {
&self.batch_range
}
fn batch_range_mut(&mut self) -> &mut Option<Range<u32>> {
&mut self.batch_range
}
}
pub struct Opaque3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for Opaque3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Opaque3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Opaque3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub struct AlphaMask3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for AlphaMask3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for AlphaMask3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for AlphaMask3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub struct Transparent3d {
pub distance: f32,
pub pipeline: CachedRenderPipelineId,
pub entity: Entity,
pub draw_function: DrawFunctionId,
}
impl PhaseItem for Transparent3d {
type SortKey = FloatOrd;
#[inline]
fn sort_key(&self) -> Self::SortKey {
FloatOrd(self.distance)
}
#[inline]
fn draw_function(&self) -> DrawFunctionId {
self.draw_function
}
}
impl EntityPhaseItem for Transparent3d {
#[inline]
fn entity(&self) -> Entity {
self.entity
}
}
impl CachedRenderPipelinePhaseItem for Transparent3d {
#[inline]
fn cached_pipeline(&self) -> CachedRenderPipelineId {
self.pipeline
}
}
pub fn extract_core_pipeline_camera_phases(
mut commands: Commands,
active_2d: Res<ActiveCamera<Camera2d>>,
active_3d: Res<ActiveCamera<Camera3d>>,
) {
if let Some(entity) = active_2d.get() {
commands
.get_or_spawn(entity)
.insert(RenderPhase::<Transparent2d>::default());
}
if let Some(entity) = active_3d.get() {
commands.get_or_spawn(entity).insert_bundle((
RenderPhase::<Opaque3d>::default(),
RenderPhase::<AlphaMask3d>::default(),
RenderPhase::<Transparent3d>::default(),
));
}
}
pub fn prepare_core_views_system(
mut commands: Commands,
mut texture_cache: ResMut<TextureCache>,
msaa: Res<Msaa>,
render_device: Res<RenderDevice>,
views_3d: Query<
(Entity, &ExtractedView, Option<&ExtractedCamera>),
(
With<RenderPhase<Opaque3d>>,
With<RenderPhase<AlphaMask3d>>,
With<RenderPhase<Transparent3d>>,
),
>,
) {
let mut textures = HashMap::default();
for (entity, view, camera) in views_3d.iter() {
let mut get_cached_texture = || {
texture_cache.get(
&render_device,
TextureDescriptor {
label: Some("view_depth_texture"),
size: Extent3d {
depth_or_array_layers: 1,
width: view.width as u32,
height: view.height as u32,
},
mip_level_count: 1,
sample_count: msaa.samples,
dimension: TextureDimension::D2,
format: TextureFormat::Depth32Float, /* PERF: vulkan docs recommend using 24
* bit depth for better performance */
usage: TextureUsages::RENDER_ATTACHMENT,
},
)
};
let cached_texture = if let Some(camera) = camera {
textures
.entry(camera.target.clone())
.or_insert_with(get_cached_texture)
.clone()
} else {
get_cached_texture()
};
commands.entity(entity).insert(ViewDepthTexture {
texture: cached_texture.texture,
view: cached_texture.default_view,
});
.add_plugin(Core2dPlugin)
.add_plugin(Core3dPlugin);
}
}


@ -1,33 +0,0 @@
use bevy_ecs::world::World;
use bevy_render::{
camera::{ActiveCamera, Camera2d, Camera3d},
render_graph::{Node, NodeRunError, RenderGraphContext, SlotValue},
renderer::RenderContext,
};
pub struct MainPassDriverNode;
impl Node for MainPassDriverNode {
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
if let Some(camera_3d) = world.resource::<ActiveCamera<Camera3d>>().get() {
graph.run_sub_graph(
crate::draw_3d_graph::NAME,
vec![SlotValue::Entity(camera_3d)],
)?;
}
if let Some(camera_2d) = world.resource::<ActiveCamera<Camera2d>>().get() {
graph.run_sub_graph(
crate::draw_2d_graph::NAME,
vec![SlotValue::Entity(camera_2d)],
)?;
}
Ok(())
}
}


@ -14,6 +14,7 @@ bevy_animation = { path = "../bevy_animation", version = "0.8.0-dev", optional =
bevy_app = { path = "../bevy_app", version = "0.8.0-dev" }
bevy_asset = { path = "../bevy_asset", version = "0.8.0-dev" }
bevy_core = { path = "../bevy_core", version = "0.8.0-dev" }
bevy_core_pipeline = { path = "../bevy_core_pipeline", version = "0.8.0-dev" }
bevy_ecs = { path = "../bevy_ecs", version = "0.8.0-dev" }
bevy_hierarchy = { path = "../bevy_hierarchy", version = "0.8.0-dev" }
bevy_log = { path = "../bevy_log", version = "0.8.0-dev" }


@ -3,6 +3,7 @@ use bevy_asset::{
AssetIoError, AssetLoader, AssetPath, BoxedFuture, Handle, LoadContext, LoadedAsset,
};
use bevy_core::Name;
use bevy_core_pipeline::prelude::Camera3d;
use bevy_ecs::{entity::Entity, prelude::FromWorld, world::World};
use bevy_hierarchy::{BuildWorldChildren, WorldChildBuilder};
use bevy_log::warn;
@ -13,7 +14,7 @@ use bevy_pbr::{
};
use bevy_render::{
camera::{
Camera, Camera3d, CameraProjection, OrthographicProjection, PerspectiveProjection,
Camera, CameraRenderGraph, OrthographicProjection, PerspectiveProjection, Projection,
ScalingMode,
},
color::Color,
@ -459,6 +460,7 @@ async fn load_gltf<'a, 'b>(
let mut scenes = vec![];
let mut named_scenes = HashMap::default();
let mut active_camera_found = false;
for scene in gltf.scenes() {
let mut err = None;
let mut world = World::default();
@ -477,6 +479,7 @@ async fn load_gltf<'a, 'b>(
&buffer_data,
&mut node_index_to_entity_map,
&mut entity_to_skin_index_map,
&mut active_camera_found,
);
if result.is_err() {
err = Some(result);
@ -701,6 +704,7 @@ fn load_node(
buffer_data: &[Vec<u8>],
node_index_to_entity_map: &mut HashMap<usize, Entity>,
entity_to_skin_index_map: &mut HashMap<Entity, usize>,
active_camera_found: &mut bool,
) -> Result<(), GltfError> {
let transform = gltf_node.transform();
let mut gltf_error = None;
@ -718,14 +722,7 @@ fn load_node(
// create camera node
if let Some(camera) = gltf_node.camera() {
node.insert_bundle((
VisibleEntities {
..Default::default()
},
Frustum::default(),
));
match camera.projection() {
let projection = match camera.projection() {
gltf::camera::Projection::Orthographic(orthographic) => {
let xmag = orthographic.xmag();
let orthographic_projection: OrthographicProjection = OrthographicProjection {
@ -736,12 +733,7 @@ fn load_node(
..Default::default()
};
node.insert(Camera {
projection_matrix: orthographic_projection.get_projection_matrix(),
..Default::default()
});
node.insert(orthographic_projection);
node.insert(Camera3d);
Projection::Orthographic(orthographic_projection)
}
gltf::camera::Projection::Perspective(perspective) => {
let mut perspective_projection: PerspectiveProjection = PerspectiveProjection {
@ -755,14 +747,23 @@ fn load_node(
if let Some(aspect_ratio) = perspective.aspect_ratio() {
perspective_projection.aspect_ratio = aspect_ratio;
}
node.insert(Camera {
projection_matrix: perspective_projection.get_projection_matrix(),
..Default::default()
});
node.insert(perspective_projection);
node.insert(Camera3d);
Projection::Perspective(perspective_projection)
}
}
};
node.insert_bundle((
projection,
Camera {
is_active: !*active_camera_found,
..Default::default()
},
VisibleEntities::default(),
Frustum::default(),
Camera3d::default(),
CameraRenderGraph::new(bevy_core_pipeline::core_3d::graph::NAME),
));
*active_camera_found = true;
}
// Map node index to entity
@ -875,6 +876,7 @@ fn load_node(
buffer_data,
node_index_to_entity_map,
entity_to_skin_index_map,
active_camera_found,
) {
gltf_error = Some(err);
return;


@ -39,6 +39,7 @@ use bevy_asset::{load_internal_asset, Assets, Handle, HandleUntyped};
use bevy_ecs::prelude::*;
use bevy_reflect::TypeUuid;
use bevy_render::{
camera::CameraUpdateSystem,
extract_resource::ExtractResourcePlugin,
prelude::Color,
render_graph::RenderGraph,
@ -107,6 +108,7 @@ impl Plugin for PbrPlugin {
assign_lights_to_clusters
.label(SimulationLightSystems::AssignLightsToClusters)
.after(TransformSystem::TransformPropagate)
.after(CameraUpdateSystem)
.after(ModifiesWindows),
)
.add_system_to_stage(
@ -192,19 +194,19 @@ impl Plugin for PbrPlugin {
render_app.add_render_command::<Shadow, DrawShadowMesh>();
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let draw_3d_graph = graph
.get_sub_graph_mut(bevy_core_pipeline::draw_3d_graph::NAME)
.get_sub_graph_mut(bevy_core_pipeline::core_3d::graph::NAME)
.unwrap();
draw_3d_graph.add_node(draw_3d_graph::node::SHADOW_PASS, shadow_pass_node);
draw_3d_graph
.add_node_edge(
draw_3d_graph::node::SHADOW_PASS,
bevy_core_pipeline::draw_3d_graph::node::MAIN_PASS,
bevy_core_pipeline::core_3d::graph::node::MAIN_PASS,
)
.unwrap();
draw_3d_graph
.add_slot_edge(
draw_3d_graph.input_node().unwrap().id,
bevy_core_pipeline::draw_3d_graph::input::VIEW_ENTITY,
bevy_core_pipeline::core_3d::graph::input::VIEW_ENTITY,
draw_3d_graph::node::SHADOW_PASS,
ShadowPassNode::IN_VIEW,
)


@ -1,6 +1,5 @@
use std::collections::HashSet;
use bevy_asset::Assets;
use bevy_ecs::prelude::*;
use bevy_math::{
const_vec2, Mat4, UVec2, UVec3, Vec2, Vec3, Vec3A, Vec3Swizzles, Vec4, Vec4Swizzles,
@ -10,7 +9,6 @@ use bevy_render::{
camera::{Camera, CameraProjection, OrthographicProjection},
color::Color,
extract_resource::ExtractResource,
prelude::Image,
primitives::{Aabb, CubemapFrusta, Frustum, Plane, Sphere},
render_resource::BufferBindingType,
renderer::RenderDevice,
@ -18,7 +16,6 @@ use bevy_render::{
};
use bevy_transform::components::GlobalTransform;
use bevy_utils::tracing::warn;
use bevy_window::Windows;
use crate::{
calculate_cluster_factors, CubeMapFace, CubemapVisibleEntities, ViewClusterBindings,
@ -637,8 +634,6 @@ impl GlobalVisiblePointLights {
pub(crate) fn assign_lights_to_clusters(
mut commands: Commands,
mut global_lights: ResMut<GlobalVisiblePointLights>,
windows: Res<Windows>,
images: Res<Assets<Image>>,
mut views: Query<(
Entity,
&GlobalTransform,
@ -741,13 +736,12 @@ pub(crate) fn assign_lights_to_clusters(
continue;
}
let screen_size =
if let Some(screen_size) = camera.target.get_physical_size(&windows, &images) {
screen_size
} else {
clusters.clear();
continue;
};
let screen_size = if let Some(screen_size) = camera.physical_target_size {
screen_size
} else {
clusters.clear();
continue;
};
let mut requested_cluster_dimensions = config.dimensions_for_screen_size(screen_size);


@ -4,7 +4,7 @@ use crate::{
};
use bevy_app::{App, Plugin};
use bevy_asset::{AddAsset, Asset, AssetServer, Handle};
use bevy_core_pipeline::{AlphaMask3d, Opaque3d, Transparent3d};
use bevy_core_pipeline::core_3d::{AlphaMask3d, Opaque3d, Transparent3d};
use bevy_ecs::{
entity::Entity,
prelude::World,


@ -4,7 +4,7 @@ use crate::{
PointLight, PointLightShadowMap, SetMeshBindGroup, VisiblePointLights, SHADOW_SHADER_HANDLE,
};
use bevy_asset::Handle;
use bevy_core_pipeline::Transparent3d;
use bevy_core_pipeline::core_3d::Transparent3d;
use bevy_ecs::{
prelude::*,
system::{lifetimeless::*, SystemParamItem},


@ -2,7 +2,7 @@ use crate::MeshPipeline;
use crate::{DrawMesh, MeshPipelineKey, MeshUniform, SetMeshBindGroup, SetMeshViewBindGroup};
use bevy_app::Plugin;
use bevy_asset::{load_internal_asset, Handle, HandleUntyped};
use bevy_core_pipeline::Opaque3d;
use bevy_core_pipeline::core_3d::Opaque3d;
use bevy_ecs::{prelude::*, reflect::ReflectComponent};
use bevy_reflect::std_traits::ReflectDefault;
use bevy_reflect::{Reflect, TypeUuid};


@ -1,164 +0,0 @@
use super::{CameraProjection, ScalingMode};
use crate::{
camera::{Camera, DepthCalculation, OrthographicProjection, PerspectiveProjection},
primitives::Frustum,
view::VisibleEntities,
};
use bevy_ecs::reflect::ReflectComponent;
use bevy_ecs::{bundle::Bundle, prelude::Component};
use bevy_math::Vec3;
use bevy_reflect::Reflect;
use bevy_transform::components::{GlobalTransform, Transform};
#[derive(Component, Default, Reflect)]
#[reflect(Component)]
pub struct Camera3d;
#[derive(Component, Default, Reflect)]
#[reflect(Component)]
pub struct Camera2d;
/// Component bundle for camera entities with perspective projection
///
/// Use this for 3D rendering.
#[derive(Bundle)]
pub struct PerspectiveCameraBundle<M: Component> {
pub camera: Camera,
pub perspective_projection: PerspectiveProjection,
pub visible_entities: VisibleEntities,
pub frustum: Frustum,
pub transform: Transform,
pub global_transform: GlobalTransform,
pub marker: M,
}
impl Default for PerspectiveCameraBundle<Camera3d> {
fn default() -> Self {
PerspectiveCameraBundle::new_3d()
}
}
impl PerspectiveCameraBundle<Camera3d> {
pub fn new_3d() -> Self {
PerspectiveCameraBundle::new()
}
}
impl<M: Component + Default> PerspectiveCameraBundle<M> {
pub fn new() -> Self {
let perspective_projection = PerspectiveProjection::default();
let view_projection = perspective_projection.get_projection_matrix();
let frustum = Frustum::from_view_projection(
&view_projection,
&Vec3::ZERO,
&Vec3::Z,
perspective_projection.far(),
);
PerspectiveCameraBundle {
camera: Camera::default(),
perspective_projection,
visible_entities: VisibleEntities::default(),
frustum,
transform: Default::default(),
global_transform: Default::default(),
marker: M::default(),
}
}
}
/// Component bundle for camera entities with orthographic projection
///
/// Use this for 2D games, isometric games, CAD-like 3D views.
#[derive(Bundle)]
pub struct OrthographicCameraBundle<M: Component> {
pub camera: Camera,
pub orthographic_projection: OrthographicProjection,
pub visible_entities: VisibleEntities,
pub frustum: Frustum,
pub transform: Transform,
pub global_transform: GlobalTransform,
pub marker: M,
}
impl OrthographicCameraBundle<Camera3d> {
pub fn new_3d() -> Self {
let orthographic_projection = OrthographicProjection {
scaling_mode: ScalingMode::FixedVertical(2.0),
depth_calculation: DepthCalculation::Distance,
..Default::default()
};
let view_projection = orthographic_projection.get_projection_matrix();
let frustum = Frustum::from_view_projection(
&view_projection,
&Vec3::ZERO,
&Vec3::Z,
orthographic_projection.far(),
);
OrthographicCameraBundle {
camera: Camera::default(),
orthographic_projection,
visible_entities: VisibleEntities::default(),
frustum,
transform: Default::default(),
global_transform: Default::default(),
marker: Camera3d,
}
}
}
impl OrthographicCameraBundle<Camera2d> {
/// Create an orthographic projection camera to render 2D content.
///
/// The projection creates a camera space where X points to the right of the screen,
/// Y points to the top of the screen, and Z points out of the screen (backward),
/// forming a right-handed coordinate system. The center of the screen is at `X=0` and
/// `Y=0`.
///
/// The default scaling mode is [`ScalingMode::WindowSize`], resulting in a resolution
/// where 1 unit in X and Y in camera space corresponds to 1 logical pixel on the screen.
/// That is, for a screen of 1920 pixels in width, the X coordinates visible on screen go
/// from `X=-960` to `X=+960` in world space, left to right. This can be changed by changing
/// the [`OrthographicProjection::scaling_mode`] field.
///
/// The camera is placed at `Z=+1000-0.1`, looking toward the world origin `(0,0,0)`.
/// Its orthographic projection extends from `0.0` to `-1000.0` in camera view space,
/// corresponding to `Z=+999.9` (closest to camera) to `Z=-0.1` (furthest away from
/// camera) in world space.
pub fn new_2d() -> Self {
Self::new_2d_with_far(1000.0)
}
/// Create an orthographic projection camera with a custom Z position.
///
/// The camera is placed at `Z=far-0.1`, looking toward the world origin `(0,0,0)`.
/// Its orthographic projection extends from `0.0` to `-far` in camera view space,
/// corresponding to `Z=far-0.1` (closest to camera) to `Z=-0.1` (furthest away from
/// camera) in world space.
pub fn new_2d_with_far(far: f32) -> Self {
// we want 0 to be "closest" and +far to be "farthest" in 2d, so we offset
// the camera's translation by far and use a right handed coordinate system
let orthographic_projection = OrthographicProjection {
far,
depth_calculation: DepthCalculation::ZDifference,
..Default::default()
};
let transform = Transform::from_xyz(0.0, 0.0, far - 0.1);
let view_projection =
orthographic_projection.get_projection_matrix() * transform.compute_matrix().inverse();
let frustum = Frustum::from_view_projection(
&view_projection,
&transform.translation,
&transform.back(),
orthographic_projection.far(),
);
OrthographicCameraBundle {
camera: Camera::default(),
orthographic_projection,
visible_entities: VisibleEntities::default(),
frustum,
transform,
global_transform: Default::default(),
marker: Camera2d,
}
}
}


@ -1,24 +1,20 @@
use std::marker::PhantomData;
use crate::{
camera::CameraProjection,
prelude::Image,
render_asset::RenderAssets,
render_resource::TextureView,
view::{ExtractedView, ExtractedWindows, VisibleEntities},
RenderApp, RenderStage,
};
use bevy_app::{App, CoreStage, Plugin, StartupStage};
use bevy_asset::{AssetEvent, Assets, Handle};
use bevy_derive::{Deref, DerefMut};
use bevy_ecs::{
change_detection::DetectChanges,
component::Component,
entity::Entity,
event::EventReader,
prelude::With,
query::Added,
reflect::ReflectComponent,
system::{Commands, ParamSet, Query, Res, ResMut},
system::{Commands, ParamSet, Query, Res},
};
use bevy_math::{Mat4, UVec2, Vec2, Vec3};
use bevy_reflect::prelude::*;
@ -26,19 +22,94 @@ use bevy_transform::components::GlobalTransform;
use bevy_utils::HashSet;
use bevy_window::{WindowCreated, WindowId, WindowResized, Windows};
use serde::{Deserialize, Serialize};
use std::borrow::Cow;
use wgpu::Extent3d;
#[derive(Component, Default, Debug, Reflect, Clone)]
#[reflect(Component, Default)]
#[derive(Component, Debug, Reflect, Clone)]
#[reflect(Component)]
pub struct Camera {
pub projection_matrix: Mat4,
pub logical_target_size: Option<Vec2>,
pub physical_target_size: Option<UVec2>,
pub priority: isize,
pub is_active: bool,
#[reflect(ignore)]
pub target: RenderTarget,
#[reflect(ignore)]
pub depth_calculation: DepthCalculation,
}
#[derive(Debug, Clone, Reflect, PartialEq, Eq, Hash)]
impl Default for Camera {
fn default() -> Self {
Self {
is_active: true,
priority: 0,
projection_matrix: Default::default(),
logical_target_size: Default::default(),
physical_target_size: Default::default(),
target: Default::default(),
depth_calculation: Default::default(),
}
}
}
impl Camera {
/// Given a position in world space, use the camera to compute the viewport-space coordinates.
///
/// To get the coordinates in Normalized Device Coordinates, you should use
/// [`world_to_ndc`](Self::world_to_ndc).
pub fn world_to_viewport(
&self,
camera_transform: &GlobalTransform,
world_position: Vec3,
) -> Option<Vec2> {
let target_size = self.logical_target_size?;
let ndc_space_coords = self.world_to_ndc(camera_transform, world_position)?;
// NDC z-values outside of 0 < z < 1 are outside the camera frustum and are thus not in viewport-space
if ndc_space_coords.z < 0.0 || ndc_space_coords.z > 1.0 {
return None;
}
// Once in NDC space, we can discard the z element and rescale x/y to fit the screen
Some((ndc_space_coords.truncate() + Vec2::ONE) / 2.0 * target_size)
}
/// Given a position in world space, use the camera's viewport to compute the Normalized Device Coordinates.
///
/// Values returned will be between -1.0 and 1.0 when the position is within the viewport.
/// To get the coordinates in the render target's viewport dimensions, you should use
/// [`world_to_viewport`](Self::world_to_viewport).
pub fn world_to_ndc(
&self,
camera_transform: &GlobalTransform,
world_position: Vec3,
) -> Option<Vec3> {
// Build a transform to convert from world to NDC using camera data
let world_to_ndc: Mat4 =
self.projection_matrix * camera_transform.compute_matrix().inverse();
let ndc_space_coords: Vec3 = world_to_ndc.project_point3(world_position);
if !ndc_space_coords.is_nan() {
Some(ndc_space_coords)
} else {
None
}
}
}
/// Configures the [`RenderGraph`](crate::render_graph::RenderGraph) name that will be run for a given [`Camera`] entity.
#[derive(Component, Deref, DerefMut, Reflect, Default)]
#[reflect(Component)]
pub struct CameraRenderGraph(Cow<'static, str>);
impl CameraRenderGraph {
#[inline]
pub fn new<T: Into<Cow<'static, str>>>(name: T) -> Self {
Self(name.into())
}
}
#[derive(Debug, Clone, Reflect, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub enum RenderTarget {
/// Window to which the camera's view is rendered.
Window(WindowId),
@ -118,52 +189,6 @@ impl Default for DepthCalculation {
}
}
impl Camera {
/// Given a position in world space, use the camera to compute the screen space coordinates.
///
/// To get the coordinates in Normalized Device Coordinates, you should use
/// [`world_to_ndc`](Self::world_to_ndc).
pub fn world_to_screen(
&self,
windows: &Windows,
images: &Assets<Image>,
camera_transform: &GlobalTransform,
world_position: Vec3,
) -> Option<Vec2> {
let window_size = self.target.get_logical_size(windows, images)?;
let ndc_space_coords = self.world_to_ndc(camera_transform, world_position)?;
// NDC z-values outside of 0 < z < 1 are outside the camera frustum and are thus not in screen space
if ndc_space_coords.z < 0.0 || ndc_space_coords.z > 1.0 {
return None;
}
// Once in NDC space, we can discard the z element and rescale x/y to fit the screen
Some((ndc_space_coords.truncate() + Vec2::ONE) / 2.0 * window_size)
}
/// Given a position in world space, use the camera to compute the Normalized Device Coordinates.
///
/// Values returned will be between -1.0 and 1.0 when the position is in screen space.
/// To get the coordinates in the render target dimensions, you should use
/// [`world_to_screen`](Self::world_to_screen).
pub fn world_to_ndc(
&self,
camera_transform: &GlobalTransform,
world_position: Vec3,
) -> Option<Vec3> {
// Build a transform to convert from world to NDC using camera data
let world_to_ndc: Mat4 =
self.projection_matrix * camera_transform.compute_matrix().inverse();
let ndc_space_coords: Vec3 = world_to_ndc.project_point3(world_position);
if !ndc_space_coords.is_nan() {
Some(ndc_space_coords)
} else {
None
}
}
}
pub fn camera_system<T: CameraProjection + Component>(
mut window_resized_events: EventReader<WindowResized>,
mut window_created_events: EventReader<WindowCreated>,
@ -218,7 +243,9 @@ pub fn camera_system<T: CameraProjection + Component>(
|| added_cameras.contains(&entity)
|| camera_projection.is_changed()
{
if let Some(size) = camera.target.get_logical_size(&windows, &images) {
camera.logical_target_size = camera.target.get_logical_size(&windows, &images);
camera.physical_target_size = camera.target.get_physical_size(&windows, &images);
if let Some(size) = camera.logical_target_size {
camera_projection.update(size.x, size.y);
camera.projection_matrix = camera_projection.get_projection_matrix();
camera.depth_calculation = camera_projection.depth_calculation();
@ -227,116 +254,44 @@ pub fn camera_system<T: CameraProjection + Component>(
}
}
pub struct CameraTypePlugin<T: Component + Default>(PhantomData<T>);
impl<T: Component + Default> Default for CameraTypePlugin<T> {
fn default() -> Self {
Self(Default::default())
}
}
impl<T: Component + Default> Plugin for CameraTypePlugin<T> {
fn build(&self, app: &mut App) {
app.init_resource::<ActiveCamera<T>>()
.add_startup_system_to_stage(StartupStage::PostStartup, set_active_camera::<T>)
.add_system_to_stage(CoreStage::PostUpdate, set_active_camera::<T>);
if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
render_app.add_system_to_stage(RenderStage::Extract, extract_cameras::<T>);
}
}
}
/// The canonical source of the "active camera" of the given camera type `T`.
#[derive(Debug)]
pub struct ActiveCamera<T: Component> {
camera: Option<Entity>,
marker: PhantomData<T>,
}
impl<T: Component> Default for ActiveCamera<T> {
fn default() -> Self {
Self {
camera: Default::default(),
marker: Default::default(),
}
}
}
impl<T: Component> Clone for ActiveCamera<T> {
fn clone(&self) -> Self {
Self {
camera: self.camera,
marker: self.marker,
}
}
}
impl<T: Component> ActiveCamera<T> {
/// Sets the active camera to the given `camera` entity.
pub fn set(&mut self, camera: Entity) {
self.camera = Some(camera);
}
/// Returns the active camera, if it exists.
pub fn get(&self) -> Option<Entity> {
self.camera
}
}
pub fn set_active_camera<T: Component>(
mut active_camera: ResMut<ActiveCamera<T>>,
cameras: Query<Entity, (With<Camera>, With<T>)>,
) {
// Check if there is already an active camera set and
// that it has not been deleted on the previous frame
if let Some(camera) = active_camera.get() {
if cameras.contains(camera) {
return;
}
}
// If the previous active camera ceased to exist
// fallback to another camera of the same type T
if let Some(camera) = cameras.iter().next() {
active_camera.camera = Some(camera);
} else {
active_camera.camera = None;
}
}
#[derive(Component, Debug)]
pub struct ExtractedCamera {
pub target: RenderTarget,
pub physical_size: Option<UVec2>,
pub render_graph: Cow<'static, str>,
pub priority: isize,
}
pub fn extract_cameras<M: Component + Default>(
pub fn extract_cameras(
mut commands: Commands,
windows: Res<Windows>,
images: Res<Assets<Image>>,
active_camera: Res<ActiveCamera<M>>,
query: Query<(&Camera, &GlobalTransform, &VisibleEntities), With<M>>,
query: Query<(
Entity,
&Camera,
&CameraRenderGraph,
&GlobalTransform,
&VisibleEntities,
)>,
) {
if let Some(entity) = active_camera.get() {
if let Ok((camera, transform, visible_entities)) = query.get(entity) {
if let Some(size) = camera.target.get_physical_size(&windows, &images) {
commands.get_or_spawn(entity).insert_bundle((
ExtractedCamera {
target: camera.target.clone(),
physical_size: camera.target.get_physical_size(&windows, &images),
},
ExtractedView {
projection: camera.projection_matrix,
transform: *transform,
width: size.x,
height: size.y,
},
visible_entities.clone(),
M::default(),
));
}
for (entity, camera, camera_render_graph, transform, visible_entities) in query.iter() {
if !camera.is_active {
continue;
}
if let Some(size) = camera.physical_target_size {
commands.get_or_spawn(entity).insert_bundle((
ExtractedCamera {
target: camera.target.clone(),
physical_size: Some(size),
render_graph: camera_render_graph.0.clone(),
priority: camera.priority,
},
ExtractedView {
projection: camera.projection_matrix,
transform: *transform,
width: size.x,
height: size.y,
},
visible_entities.clone(),
));
}
}
commands.insert_resource(active_camera.clone());
}

View file
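The reworked `Camera` component above caches `logical_target_size` / `physical_target_size` and replaces the old `world_to_screen` helper with `world_to_viewport` / `world_to_ndc`, so callers no longer need `Windows` or `Assets<Image>`. A minimal usage sketch (not part of this diff; the `Player` marker component is hypothetical):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Player;

// Logs where each player sits in each active camera's viewport, using the new
// `Camera::world_to_viewport` helper added in this PR.
fn log_player_viewport_position(
    cameras: Query<(&Camera, &GlobalTransform)>,
    players: Query<&GlobalTransform, With<Player>>,
) {
    for (camera, camera_transform) in cameras.iter() {
        // skip cameras that are not currently active
        if !camera.is_active {
            continue;
        }
        for player_transform in players.iter() {
            // returns None when the target size is unknown or the point is outside
            // the 0..1 NDC depth range
            if let Some(viewport_pos) =
                camera.world_to_viewport(camera_transform, player_transform.translation)
            {
                info!("player at {:?} in viewport space", viewport_pos);
            }
        }
    }
}
```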

@ -0,0 +1,109 @@
use crate::{
camera::{ExtractedCamera, RenderTarget},
render_graph::{Node, NodeRunError, RenderGraphContext, SlotValue},
renderer::RenderContext,
view::ExtractedWindows,
};
use bevy_ecs::{entity::Entity, prelude::QueryState, world::World};
use bevy_utils::{tracing::warn, HashSet};
use wgpu::{LoadOp, Operations, RenderPassColorAttachment, RenderPassDescriptor};
pub struct CameraDriverNode {
cameras: QueryState<(Entity, &'static ExtractedCamera)>,
}
impl CameraDriverNode {
pub fn new(world: &mut World) -> Self {
Self {
cameras: world.query(),
}
}
}
impl Node for CameraDriverNode {
fn update(&mut self, world: &mut World) {
self.cameras.update_archetypes(world);
}
fn run(
&self,
graph: &mut RenderGraphContext,
render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
let mut sorted_cameras = self
.cameras
.iter_manual(world)
.map(|(e, c)| (e, c.priority, c.target.clone()))
.collect::<Vec<_>>();
// sort by priority and, within each priority, pack RenderTargets of the same type together
sorted_cameras.sort_by(|(_, p1, t1), (_, p2, t2)| match p1.cmp(p2) {
std::cmp::Ordering::Equal => t1.cmp(t2),
ord => ord,
});
let mut camera_windows = HashSet::new();
let mut previous_priority_target = None;
let mut ambiguities = HashSet::new();
for (entity, priority, target) in sorted_cameras {
let new_priority_target = (priority, target);
if let Some(previous_priority_target) = previous_priority_target {
if previous_priority_target == new_priority_target {
ambiguities.insert(new_priority_target.clone());
}
}
previous_priority_target = Some(new_priority_target);
if let Ok((_, camera)) = self.cameras.get_manual(world, entity) {
if let RenderTarget::Window(id) = camera.target {
camera_windows.insert(id);
}
graph
.run_sub_graph(camera.render_graph.clone(), vec![SlotValue::Entity(entity)])?;
}
}
if !ambiguities.is_empty() {
warn!(
"Camera priority ambiguities detected for active cameras with the following priorities: {:?}. \
To fix this, ensure there is exactly one Camera entity spawned with a given priority for a given RenderTarget. \
Ambiguities should be resolved because either (1) multiple active cameras were spawned accidentally, which will \
result in rendering multiple instances of the scene or (2) for cases where multiple active cameras are intentional, \
ambiguities could result in unpredictable render results.",
ambiguities
);
}
// wgpu (and some backends) require doing work for swap chains if you call `get_current_texture()` and `present()`
// This ensures that Bevy doesn't crash, even when there are no cameras (and therefore no work submitted).
for (id, window) in world.resource::<ExtractedWindows>().iter() {
if camera_windows.contains(id) {
continue;
}
let swap_chain_texture = if let Some(swap_chain_texture) = &window.swap_chain_texture {
swap_chain_texture
} else {
continue;
};
#[cfg(feature = "trace")]
let _span = bevy_utils::tracing::info_span!("no_camera_clear_pass").entered();
let pass_descriptor = RenderPassDescriptor {
label: Some("no_camera_clear_pass"),
color_attachments: &[RenderPassColorAttachment {
view: swap_chain_texture,
resolve_target: None,
ops: Operations {
load: LoadOp::Clear(wgpu::Color::BLACK),
store: true,
},
}],
depth_stencil_attachment: None,
};
render_context
.command_encoder
.begin_render_pass(&pass_descriptor);
}
Ok(())
}
}

View file
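The `CameraDriverNode` above replaces the per-camera-type driver nodes: it sorts active cameras by `(priority, target)` and dispatches each one to whatever sub-graph its `CameraRenderGraph` names. That means a camera can be pointed at a specific graph explicitly; a minimal sketch (not part of this diff) using the `core_2d` graph name referenced elsewhere in this PR:

```rust
use bevy::core_pipeline::core_2d::graph as core_2d_graph;
use bevy::prelude::*;
use bevy::render::camera::CameraRenderGraph;

fn spawn_camera_with_explicit_graph(mut commands: Commands) {
    commands
        .spawn_bundle(Camera2dBundle::default())
        // equivalent to the bundle's default; a custom sub-graph registered via
        // `RenderGraph::add_sub_graph` could be named here instead
        .insert(CameraRenderGraph::new(core_2d_graph::NAME));
}
```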

@ -1,19 +1,19 @@
mod bundle;
#[allow(clippy::module_inception)]
mod camera;
mod camera_driver_node;
mod projection;
pub use bundle::*;
pub use camera::*;
pub use camera_driver_node::*;
pub use projection::*;
use crate::{
primitives::Aabb,
render_graph::RenderGraph,
view::{ComputedVisibility, Visibility, VisibleEntities},
RenderApp, RenderStage,
};
use bevy_app::{App, CoreStage, Plugin};
use bevy_ecs::schedule::ParallelSystemDescriptorCoercion;
use bevy_window::ModifiesWindows;
use bevy_app::{App, Plugin};
#[derive(Default)]
pub struct CameraPlugin;
@ -23,24 +23,22 @@ impl Plugin for CameraPlugin {
app.register_type::<Camera>()
.register_type::<Visibility>()
.register_type::<ComputedVisibility>()
.register_type::<OrthographicProjection>()
.register_type::<PerspectiveProjection>()
.register_type::<VisibleEntities>()
.register_type::<WindowOrigin>()
.register_type::<ScalingMode>()
.register_type::<DepthCalculation>()
.register_type::<Aabb>()
.register_type::<Camera3d>()
.register_type::<Camera2d>()
.add_system_to_stage(
CoreStage::PostUpdate,
crate::camera::camera_system::<OrthographicProjection>.after(ModifiesWindows),
)
.add_system_to_stage(
CoreStage::PostUpdate,
crate::camera::camera_system::<PerspectiveProjection>.after(ModifiesWindows),
)
.add_plugin(CameraTypePlugin::<Camera3d>::default())
.add_plugin(CameraTypePlugin::<Camera2d>::default());
.register_type::<CameraRenderGraph>()
.add_plugin(CameraProjectionPlugin::<Projection>::default())
.add_plugin(CameraProjectionPlugin::<OrthographicProjection>::default())
.add_plugin(CameraProjectionPlugin::<PerspectiveProjection>::default());
if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
render_app.add_system_to_stage(RenderStage::Extract, extract_cameras);
let camera_driver_node = CameraDriverNode::new(&mut render_app.world);
let mut render_graph = render_app.world.resource_mut::<RenderGraph>();
render_graph.add_node(crate::main_graph::node::CAMERA_DRIVER, camera_driver_node);
}
}
}

View file
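With `CameraProjectionPlugin::<Projection>` registered above, the runtime-switchable `Projection` component is driven by the same `camera_system` as the concrete projection types, so swapping the enum variant is enough to change a camera's projection on the fly. A minimal sketch (not part of this diff; the `P` key binding is arbitrary):

```rust
use bevy::prelude::*;
use bevy::render::camera::Projection;

fn toggle_projection(
    keys: Res<Input<KeyCode>>,
    mut cameras: Query<&mut Projection, With<Camera>>,
) {
    if keys.just_pressed(KeyCode::P) {
        for mut projection in cameras.iter_mut() {
            // the change is picked up by camera_system::<Projection> and
            // update_frusta::<Projection>, both registered by this PR
            *projection = if matches!(*projection, Projection::Orthographic(_)) {
                PerspectiveProjection::default().into()
            } else {
                OrthographicProjection::default().into()
            };
        }
    }
}
```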

@ -1,10 +1,41 @@
use std::marker::PhantomData;
use super::DepthCalculation;
use bevy_ecs::{component::Component, reflect::ReflectComponent};
use bevy_app::{App, CoreStage, Plugin, StartupStage};
use bevy_ecs::{prelude::*, reflect::ReflectComponent};
use bevy_math::Mat4;
use bevy_reflect::std_traits::ReflectDefault;
use bevy_reflect::{Reflect, ReflectDeserialize};
use bevy_reflect::{std_traits::ReflectDefault, GetTypeRegistration, Reflect, ReflectDeserialize};
use bevy_window::ModifiesWindows;
use serde::{Deserialize, Serialize};
/// Adds [`Camera`](crate::camera::Camera) driver systems for a given projection type.
pub struct CameraProjectionPlugin<T: CameraProjection>(PhantomData<T>);
impl<T: CameraProjection> Default for CameraProjectionPlugin<T> {
fn default() -> Self {
Self(Default::default())
}
}
#[derive(SystemLabel, Clone, Eq, PartialEq, Hash, Debug)]
pub struct CameraUpdateSystem;
impl<T: CameraProjection + Component + GetTypeRegistration> Plugin for CameraProjectionPlugin<T> {
fn build(&self, app: &mut App) {
app.register_type::<T>()
.add_startup_system_to_stage(
StartupStage::PostStartup,
crate::camera::camera_system::<T>,
)
.add_system_to_stage(
CoreStage::PostUpdate,
crate::camera::camera_system::<T>
.label(CameraUpdateSystem)
.after(ModifiesWindows),
);
}
}
pub trait CameraProjection {
fn get_projection_matrix(&self) -> Mat4;
fn update(&mut self, width: f32, height: f32);
@ -12,6 +43,62 @@ pub trait CameraProjection {
fn far(&self) -> f32;
}
/// A configurable [`CameraProjection`] that can select its projection type at runtime.
#[derive(Component, Debug, Clone, Reflect)]
#[reflect(Component, Default)]
pub enum Projection {
Perspective(PerspectiveProjection),
Orthographic(OrthographicProjection),
}
impl From<PerspectiveProjection> for Projection {
fn from(p: PerspectiveProjection) -> Self {
Self::Perspective(p)
}
}
impl From<OrthographicProjection> for Projection {
fn from(p: OrthographicProjection) -> Self {
Self::Orthographic(p)
}
}
impl CameraProjection for Projection {
fn get_projection_matrix(&self) -> Mat4 {
match self {
Projection::Perspective(projection) => projection.get_projection_matrix(),
Projection::Orthographic(projection) => projection.get_projection_matrix(),
}
}
fn update(&mut self, width: f32, height: f32) {
match self {
Projection::Perspective(projection) => projection.update(width, height),
Projection::Orthographic(projection) => projection.update(width, height),
}
}
fn depth_calculation(&self) -> DepthCalculation {
match self {
Projection::Perspective(projection) => projection.depth_calculation(),
Projection::Orthographic(projection) => projection.depth_calculation(),
}
}
fn far(&self) -> f32 {
match self {
Projection::Perspective(projection) => projection.far(),
Projection::Orthographic(projection) => projection.far(),
}
}
}
impl Default for Projection {
fn default() -> Self {
Projection::Perspective(Default::default())
}
}
#[derive(Component, Debug, Clone, Reflect)]
#[reflect(Component, Default)]
pub struct PerspectiveProjection {

View file
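`CameraProjectionPlugin<T>` is generic over any `CameraProjection` component, so third-party projection types can reuse the same update machinery. A minimal sketch (not part of this diff; `SimpleOrtho` and its numbers are hypothetical, and frustum updates are only registered for the built-in projection types, so culling would need a separate `update_frusta::<SimpleOrtho>` system):

```rust
use bevy::prelude::*;
use bevy::render::camera::{CameraProjection, CameraProjectionPlugin, DepthCalculation};

#[derive(Component, Reflect)]
#[reflect(Component)]
struct SimpleOrtho {
    half_height: f32,
    aspect: f32,
}

impl Default for SimpleOrtho {
    fn default() -> Self {
        Self {
            half_height: 5.0,
            aspect: 1.0,
        }
    }
}

impl CameraProjection for SimpleOrtho {
    fn get_projection_matrix(&self) -> Mat4 {
        let half_width = self.half_height * self.aspect;
        Mat4::orthographic_rh(
            -half_width,
            half_width,
            -self.half_height,
            self.half_height,
            0.0,
            self.far(),
        )
    }
    // called by camera_system::<SimpleOrtho> whenever the render target resizes
    fn update(&mut self, width: f32, height: f32) {
        self.aspect = width / height;
    }
    fn depth_calculation(&self) -> DepthCalculation {
        DepthCalculation::ZDifference
    }
    fn far(&self) -> f32 {
        1000.0
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // registers the type and adds camera_system::<SimpleOrtho> to PostUpdate
        .add_plugin(CameraProjectionPlugin::<SimpleOrtho>::default())
        .run();
}
```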

@ -18,10 +18,7 @@ pub mod view;
pub mod prelude {
#[doc(hidden)]
pub use crate::{
camera::{
Camera, OrthographicCameraBundle, OrthographicProjection, PerspectiveCameraBundle,
PerspectiveProjection,
},
camera::{Camera, OrthographicProjection, PerspectiveProjection},
color::Color,
mesh::{shape, Mesh},
render_resource::Shader,
@ -30,7 +27,6 @@ pub mod prelude {
};
}
use bevy_utils::tracing::debug;
pub use once_cell;
use crate::{
@ -47,6 +43,7 @@ use crate::{
use bevy_app::{App, AppLabel, Plugin};
use bevy_asset::{AddAsset, AssetServer};
use bevy_ecs::prelude::*;
use bevy_utils::tracing::debug;
use std::ops::{Deref, DerefMut};
/// Contains the default Bevy rendering backend based on wgpu.
@ -99,6 +96,12 @@ impl DerefMut for RenderWorld {
}
}
pub mod main_graph {
pub mod node {
pub const CAMERA_DRIVER: &str = "camera_driver";
}
}
/// A Label for the rendering sub-app.
#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq, AppLabel)]
pub struct RenderApp;
@ -171,13 +174,13 @@ impl Plugin for RenderPlugin {
.with_system(render_system.exclusive_system().at_end()),
)
.add_stage(RenderStage::Cleanup, SystemStage::parallel())
.init_resource::<RenderGraph>()
.insert_resource(instance)
.insert_resource(device)
.insert_resource(queue)
.insert_resource(adapter_info)
.insert_resource(pipeline_cache)
.insert_resource(asset_server)
.init_resource::<RenderGraph>();
.insert_resource(asset_server);
app.add_sub_app(RenderApp, render_app, move |app_world, render_app| {
#[cfg(feature = "trace")]

View file
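All camera dispatch now happens through the single top-level `main_graph::node::CAMERA_DRIVER` node exposed above, replacing the old per-pass driver nodes. Custom top-level nodes can be ordered relative to it; a minimal sketch (not part of this diff; the node name `my_prepare_node` and the `EmptyNode` placeholder are for illustration only):

```rust
use bevy::prelude::*;
use bevy::render::{
    main_graph,
    render_graph::{EmptyNode, RenderGraph},
    RenderApp,
};

struct MyRenderGraphSetupPlugin;

impl Plugin for MyRenderGraphSetupPlugin {
    fn build(&self, app: &mut App) {
        if let Ok(render_app) = app.get_sub_app_mut(RenderApp) {
            let mut graph = render_app.world.resource_mut::<RenderGraph>();
            // placeholder node; a real node would do useful work in its `run` method
            graph.add_node("my_prepare_node", EmptyNode);
            // run it before any camera sub-graphs are dispatched
            graph
                .add_node_edge("my_prepare_node", main_graph::node::CAMERA_DRIVER)
                .unwrap();
        }
    }
}
```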

@ -1,7 +1,7 @@
use crate::{
render_graph::{
Edge, InputSlotError, OutputSlotError, RenderGraphContext, RenderGraphError,
RunSubGraphError, SlotInfo, SlotInfos,
RunSubGraphError, SlotInfo, SlotInfos, SlotType, SlotValue,
},
renderer::RenderContext,
};
@ -331,3 +331,37 @@ impl Node for EmptyNode {
Ok(())
}
}
/// A [`RenderGraph`](super::RenderGraph) [`Node`] that takes a view entity as input and runs the configured sub-graph once with that entity.
/// This makes it easier to insert sub-graph runs into a graph.
pub struct RunGraphOnViewNode {
graph_name: Cow<'static, str>,
}
impl RunGraphOnViewNode {
pub const IN_VIEW: &'static str = "view";
pub fn new<T: Into<Cow<'static, str>>>(graph_name: T) -> Self {
Self {
graph_name: graph_name.into(),
}
}
}
impl Node for RunGraphOnViewNode {
fn input(&self) -> Vec<SlotInfo> {
vec![SlotInfo::new(Self::IN_VIEW, SlotType::Entity)]
}
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
_world: &World,
) -> Result<(), NodeRunError> {
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
graph.run_sub_graph(
self.graph_name.clone(),
vec![SlotValue::Entity(view_entity)],
)?;
Ok(())
}
}

View file

@ -12,7 +12,7 @@ use bevy_transform::components::GlobalTransform;
use bevy_transform::TransformSystem;
use crate::{
camera::{Camera, CameraProjection, OrthographicProjection, PerspectiveProjection},
camera::{Camera, CameraProjection, OrthographicProjection, PerspectiveProjection, Projection},
mesh::Mesh,
primitives::{Aabb, Frustum, Sphere},
};
@ -73,6 +73,7 @@ pub enum VisibilitySystems {
CalculateBounds,
UpdateOrthographicFrusta,
UpdatePerspectiveFrusta,
UpdateProjectionFrusta,
CheckVisibility,
}
@ -98,6 +99,12 @@ impl Plugin for VisibilityPlugin {
.label(UpdatePerspectiveFrusta)
.after(TransformSystem::TransformPropagate),
)
.add_system_to_stage(
CoreStage::PostUpdate,
update_frusta::<Projection>
.label(UpdateProjectionFrusta)
.after(TransformSystem::TransformPropagate),
)
.add_system_to_stage(
CoreStage::PostUpdate,
check_visibility
@ -105,6 +112,7 @@ impl Plugin for VisibilityPlugin {
.after(CalculateBounds)
.after(UpdateOrthographicFrusta)
.after(UpdatePerspectiveFrusta)
.after(UpdateProjectionFrusta)
.after(TransformSystem::TransformPropagate),
);
}

View file

@ -30,7 +30,7 @@ pub use texture_atlas_builder::*;
use bevy_app::prelude::*;
use bevy_asset::{AddAsset, Assets, HandleUntyped};
use bevy_core_pipeline::Transparent2d;
use bevy_core_pipeline::core_2d::Transparent2d;
use bevy_ecs::schedule::{ParallelSystemDescriptorCoercion, SystemLabel};
use bevy_reflect::TypeUuid;
use bevy_render::{

View file

@ -1,6 +1,6 @@
use bevy_app::{App, Plugin};
use bevy_asset::{AddAsset, Asset, AssetServer, Handle};
use bevy_core_pipeline::Transparent2d;
use bevy_core_pipeline::core_2d::Transparent2d;
use bevy_ecs::{
entity::Entity,
prelude::{Bundle, World},

View file

@ -5,7 +5,7 @@ use crate::{
Rect, Sprite, SPRITE_SHADER_HANDLE,
};
use bevy_asset::{AssetEvent, Assets, Handle, HandleId};
use bevy_core_pipeline::Transparent2d;
use bevy_core_pipeline::core_2d::Transparent2d;
use bevy_ecs::{
prelude::*,
system::{lifetimeless::*, SystemParamItem},

View file

@ -48,7 +48,7 @@ impl Default for Text2dBounds {
}
}
/// The bundle of components needed to draw text in a 2D scene via a 2D `OrthographicCameraBundle`.
/// The bundle of components needed to draw text in a 2D scene via a 2D `Camera2dBundle`.
/// [Example usage.](https://github.com/bevyengine/bevy/blob/latest/examples/2d/text2d.rs)
#[derive(Bundle, Clone, Debug, Default)]
pub struct Text2dBundle {

View file

@ -4,11 +4,12 @@ use crate::{
widget::{Button, ImageMode},
CalculatedSize, FocusPolicy, Interaction, Node, Style, UiColor, UiImage,
};
use bevy_ecs::{bundle::Bundle, prelude::Component};
use bevy_render::{
camera::{Camera, DepthCalculation, OrthographicProjection, WindowOrigin},
view::{Visibility, VisibleEntities},
use bevy_ecs::{
bundle::Bundle,
prelude::{Component, With},
query::QueryItem,
};
use bevy_render::{camera::Camera, extract_component::ExtractComponent, view::Visibility};
use bevy_text::Text;
use bevy_transform::prelude::{GlobalTransform, Transform};
@ -135,45 +136,22 @@ impl Default for ButtonBundle {
}
}
}
#[derive(Component, Default)]
pub struct CameraUi;
/// The camera that is needed to see UI elements
#[derive(Bundle, Debug)]
pub struct UiCameraBundle<M: Component> {
/// The camera component
pub camera: Camera,
/// The orthographic projection settings
pub orthographic_projection: OrthographicProjection,
/// The transform of the camera
pub transform: Transform,
/// The global transform of the camera
pub global_transform: GlobalTransform,
/// Contains visible entities
// FIXME there is no frustum culling for UI
pub visible_entities: VisibleEntities,
pub marker: M,
#[derive(Component, Clone)]
pub struct CameraUi {
pub is_enabled: bool,
}
impl Default for UiCameraBundle<CameraUi> {
impl Default for CameraUi {
fn default() -> Self {
// we want 0 to be "closest" and +far to be "farthest" in 2d, so we offset
// the camera's translation by far and use a right handed coordinate system
let far = 1000.0;
UiCameraBundle {
camera: Camera {
..Default::default()
},
orthographic_projection: OrthographicProjection {
far,
window_origin: WindowOrigin::BottomLeft,
depth_calculation: DepthCalculation::ZDifference,
..Default::default()
},
transform: Transform::from_xyz(0.0, 0.0, far - 0.1),
global_transform: Default::default(),
visible_entities: Default::default(),
marker: CameraUi,
}
Self { is_enabled: true }
}
}
impl ExtractComponent for CameraUi {
type Query = &'static Self;
type Filter = With<Camera>;
fn extract_component(item: QueryItem<Self::Query>) -> Self {
item.clone()
}
}

View file
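`UiCameraBundle` is gone: UI is now rendered once per camera by default, and the new `CameraUi` component (exported through the `bevy_ui` prelude in this PR) lets a specific camera opt out. A minimal sketch (not part of this diff):

```rust
use bevy::{core_pipeline::clear_color::ClearColorConfig, prelude::*};

fn spawn_cameras(mut commands: Commands) {
    // main camera: renders the 3d scene and, by default, the UI
    commands.spawn_bundle(Camera3dBundle::default());

    // overlay camera: renders on top of the main camera but skips the UI pass
    commands
        .spawn_bundle(Camera3dBundle {
            camera: Camera {
                priority: 1,
                ..default()
            },
            camera_3d: Camera3d {
                clear_color: ClearColorConfig::None,
            },
            ..default()
        })
        .insert(CameraUi { is_enabled: false });
}
```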

@ -1,6 +1,6 @@
//! This crate contains Bevy's UI system, which can be used to create UI for both 2D and 3D games
//! # Basic usage
//! Spawn [`entity::UiCameraBundle`] and spawn UI elements with [`entity::ButtonBundle`], [`entity::ImageBundle`], [`entity::TextBundle`] and [`entity::NodeBundle`]
//! Spawn UI elements with [`entity::ButtonBundle`], [`entity::ImageBundle`], [`entity::TextBundle`] and [`entity::NodeBundle`]
//! This UI is laid out with the Flexbox paradigm (see <https://cssreference.io/flexbox/> ) except the vertical axis is inverted
mod flex;
mod focus;
@ -12,7 +12,7 @@ pub mod entity;
pub mod update;
pub mod widget;
use bevy_render::camera::CameraTypePlugin;
use bevy_render::extract_component::ExtractComponentPlugin;
pub use flex::*;
pub use focus::*;
pub use geometry::*;
@ -50,7 +50,7 @@ pub enum UiSystem {
impl Plugin for UiPlugin {
fn build(&self, app: &mut App) {
app.add_plugin(CameraTypePlugin::<CameraUi>::default())
app.add_plugin(ExtractComponentPlugin::<CameraUi>::default())
.init_resource::<FlexSurface>()
.register_type::<AlignContent>()
.register_type::<AlignItems>()

View file

@ -1,18 +0,0 @@
use bevy_ecs::prelude::*;
use bevy_render::{camera::ActiveCamera, render_phase::RenderPhase};
use crate::prelude::CameraUi;
use super::TransparentUi;
/// Inserts the [`RenderPhase`] into the UI camera
pub fn extract_ui_camera_phases(
mut commands: Commands,
active_camera: Res<ActiveCamera<CameraUi>>,
) {
if let Some(entity) = active_camera.get() {
commands
.get_or_spawn(entity)
.insert(RenderPhase::<TransparentUi>::default());
}
}

View file

@ -1,26 +1,26 @@
mod camera;
mod pipeline;
mod render_pass;
pub use camera::*;
use bevy_core_pipeline::{core_2d::Camera2d, core_3d::Camera3d};
pub use pipeline::*;
pub use render_pass::*;
use crate::{CalculatedClip, Node, UiColor, UiImage};
use crate::{prelude::CameraUi, CalculatedClip, Node, UiColor, UiImage};
use bevy_app::prelude::*;
use bevy_asset::{load_internal_asset, AssetEvent, Assets, Handle, HandleUntyped};
use bevy_ecs::prelude::*;
use bevy_math::{const_vec3, Mat4, Vec2, Vec3, Vec4Swizzles};
use bevy_reflect::TypeUuid;
use bevy_render::{
camera::{Camera, CameraProjection, DepthCalculation, OrthographicProjection, WindowOrigin},
color::Color,
render_asset::RenderAssets,
render_graph::{RenderGraph, SlotInfo, SlotType},
render_graph::{RenderGraph, RunGraphOnViewNode, SlotInfo, SlotType},
render_phase::{sort_phase_system, AddRenderCommand, DrawFunctions, RenderPhase},
render_resource::*,
renderer::{RenderDevice, RenderQueue},
texture::Image,
view::{ViewUniforms, Visibility},
view::{ExtractedView, ViewUniforms, Visibility},
RenderApp, RenderStage, RenderWorld,
};
use bevy_sprite::{Rect, SpriteAssetEvents, TextureAtlas};
@ -70,7 +70,14 @@ pub fn build_ui_render(app: &mut App) {
.init_resource::<ExtractedUiNodes>()
.init_resource::<DrawFunctions<TransparentUi>>()
.add_render_command::<TransparentUi, DrawUi>()
.add_system_to_stage(RenderStage::Extract, extract_ui_camera_phases)
.add_system_to_stage(
RenderStage::Extract,
extract_default_ui_camera_view::<Camera2d>,
)
.add_system_to_stage(
RenderStage::Extract,
extract_default_ui_camera_view::<Camera3d>,
)
.add_system_to_stage(
RenderStage::Extract,
extract_uinodes.label(RenderUiSystem::ExtractNode),
@ -84,16 +91,64 @@ pub fn build_ui_render(app: &mut App) {
.add_system_to_stage(RenderStage::PhaseSort, sort_phase_system::<TransparentUi>);
// Render graph
let ui_pass_node = UiPassNode::new(&mut render_app.world);
let ui_graph_2d = get_ui_graph(render_app);
let ui_graph_3d = get_ui_graph(render_app);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
let mut draw_ui_graph = RenderGraph::default();
draw_ui_graph.add_node(draw_ui_graph::node::UI_PASS, ui_pass_node);
let input_node_id = draw_ui_graph.set_input(vec![SlotInfo::new(
if let Some(graph_2d) = graph.get_sub_graph_mut(bevy_core_pipeline::core_2d::graph::NAME) {
graph_2d.add_sub_graph(draw_ui_graph::NAME, ui_graph_2d);
graph_2d.add_node(
draw_ui_graph::node::UI_PASS,
RunGraphOnViewNode::new(draw_ui_graph::NAME),
);
graph_2d
.add_node_edge(
bevy_core_pipeline::core_2d::graph::node::MAIN_PASS,
draw_ui_graph::node::UI_PASS,
)
.unwrap();
graph_2d
.add_slot_edge(
graph_2d.input_node().unwrap().id,
bevy_core_pipeline::core_2d::graph::input::VIEW_ENTITY,
draw_ui_graph::node::UI_PASS,
RunGraphOnViewNode::IN_VIEW,
)
.unwrap();
}
if let Some(graph_3d) = graph.get_sub_graph_mut(bevy_core_pipeline::core_3d::graph::NAME) {
graph_3d.add_sub_graph(draw_ui_graph::NAME, ui_graph_3d);
graph_3d.add_node(
draw_ui_graph::node::UI_PASS,
RunGraphOnViewNode::new(draw_ui_graph::NAME),
);
graph_3d
.add_node_edge(
bevy_core_pipeline::core_3d::graph::node::MAIN_PASS,
draw_ui_graph::node::UI_PASS,
)
.unwrap();
graph_3d
.add_slot_edge(
graph_3d.input_node().unwrap().id,
bevy_core_pipeline::core_3d::graph::input::VIEW_ENTITY,
draw_ui_graph::node::UI_PASS,
RunGraphOnViewNode::IN_VIEW,
)
.unwrap();
}
}
fn get_ui_graph(render_app: &mut App) -> RenderGraph {
let ui_pass_node = UiPassNode::new(&mut render_app.world);
let mut ui_graph = RenderGraph::default();
ui_graph.add_node(draw_ui_graph::node::UI_PASS, ui_pass_node);
let input_node_id = ui_graph.set_input(vec![SlotInfo::new(
draw_ui_graph::input::VIEW_ENTITY,
SlotType::Entity,
)]);
draw_ui_graph
ui_graph
.add_slot_edge(
input_node_id,
draw_ui_graph::input::VIEW_ENTITY,
@ -101,15 +156,7 @@ pub fn build_ui_render(app: &mut App) {
UiPassNode::IN_VIEW,
)
.unwrap();
graph.add_sub_graph(draw_ui_graph::NAME, draw_ui_graph);
graph.add_node(node::UI_PASS_DRIVER, UiPassDriverNode);
graph
.add_node_edge(
bevy_core_pipeline::node::MAIN_PASS_DRIVER,
node::UI_PASS_DRIVER,
)
.unwrap();
ui_graph
}
pub struct ExtractedUiNode {
@ -163,6 +210,65 @@ pub fn extract_uinodes(
}
}
/// The UI camera is "moved back" by this many units (plus the [`UI_CAMERA_TRANSFORM_OFFSET`]) and also has a view
/// distance of this many units. This ensures that with a left-handed projection,
/// as ui elements are "stacked on top of each other", they are within the camera's view
/// and have room to grow.
// TODO: Consider computing this value at runtime based on the maximum z-value.
const UI_CAMERA_FAR: f32 = 1000.0;
// This value is subtracted from the far distance for the camera's z-position to ensure nodes at z == 0.0 are rendered
// TODO: Evaluate if we still need this.
const UI_CAMERA_TRANSFORM_OFFSET: f32 = -0.1;
#[derive(Component)]
pub struct DefaultCameraView(pub Entity);
pub fn extract_default_ui_camera_view<T: Component>(
mut commands: Commands,
render_world: Res<RenderWorld>,
query: Query<(Entity, &Camera, Option<&CameraUi>), With<T>>,
) {
for (entity, camera, camera_ui) in query.iter() {
// ignore cameras with disabled ui
if let Some(&CameraUi {
is_enabled: false, ..
}) = camera_ui
{
continue;
}
if let (Some(logical_size), Some(physical_size)) =
(camera.logical_target_size, camera.physical_target_size)
{
let mut projection = OrthographicProjection {
far: UI_CAMERA_FAR,
window_origin: WindowOrigin::BottomLeft,
depth_calculation: DepthCalculation::ZDifference,
..Default::default()
};
projection.update(logical_size.x, logical_size.y);
// This roundabout approach is required because spawn().id() won't work in this context
let default_camera_view = render_world.entities().reserve_entity();
commands
.get_or_spawn(default_camera_view)
.insert(ExtractedView {
projection: projection.get_projection_matrix(),
transform: GlobalTransform::from_xyz(
0.0,
0.0,
UI_CAMERA_FAR + UI_CAMERA_TRANSFORM_OFFSET,
),
width: physical_size.x,
height: physical_size.y,
});
commands.get_or_spawn(entity).insert_bundle((
DefaultCameraView(default_camera_view),
RenderPhase::<TransparentUi>::default(),
));
}
}
}
pub fn extract_text_uinodes(
mut render_world: ResMut<RenderWorld>,
texture_atlases: Res<Assets<TextureAtlas>>,
@ -447,7 +553,6 @@ pub fn queue_uinodes(
layout: &ui_pipeline.image_layout,
})
});
transparent_phase.add(TransparentUi {
draw_function: draw_ui_function,
pipeline,

View file

@ -1,9 +1,10 @@
use super::{UiBatch, UiImageBindGroups, UiMeta};
use crate::{prelude::CameraUi, DefaultCameraView};
use bevy_ecs::{
prelude::*,
system::{lifetimeless::*, SystemParamItem},
};
use bevy_render::{
camera::ActiveCamera,
render_graph::*,
render_phase::*,
render_resource::{
@ -14,30 +15,16 @@ use bevy_render::{
};
use bevy_utils::FloatOrd;
use crate::prelude::CameraUi;
use super::{draw_ui_graph, UiBatch, UiImageBindGroups, UiMeta};
pub struct UiPassDriverNode;
impl Node for UiPassDriverNode {
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
if let Some(camera_ui) = world.resource::<ActiveCamera<CameraUi>>().get() {
graph.run_sub_graph(draw_ui_graph::NAME, vec![SlotValue::Entity(camera_ui)])?;
}
Ok(())
}
}
pub struct UiPassNode {
query:
QueryState<(&'static RenderPhase<TransparentUi>, &'static ViewTarget), With<ExtractedView>>,
ui_view_query: QueryState<
(
&'static RenderPhase<TransparentUi>,
&'static ViewTarget,
Option<&'static CameraUi>,
),
With<ExtractedView>,
>,
default_camera_view_query: QueryState<&'static DefaultCameraView>,
}
impl UiPassNode {
@ -45,7 +32,8 @@ impl UiPassNode {
pub fn new(world: &mut World) -> Self {
Self {
query: QueryState::new(world),
ui_view_query: world.query_filtered(),
default_camera_view_query: world.query(),
}
}
}
@ -56,7 +44,8 @@ impl Node for UiPassNode {
}
fn update(&mut self, world: &mut World) {
self.query.update_archetypes(world);
self.ui_view_query.update_archetypes(world);
self.default_camera_view_query.update_archetypes(world);
}
fn run(
@ -65,17 +54,31 @@ impl Node for UiPassNode {
render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
let view_entity = graph.get_input_entity(Self::IN_VIEW)?;
// If there is no view entity, do not try to process the render phase for the view
let (transparent_phase, target) = match self.query.get_manual(world, view_entity) {
Ok(it) => it,
_ => return Ok(()),
};
let input_view_entity = graph.get_input_entity(Self::IN_VIEW)?;
let (transparent_phase, target, camera_ui) =
if let Ok(result) = self.ui_view_query.get_manual(world, input_view_entity) {
result
} else {
return Ok(());
};
if transparent_phase.items.is_empty() {
return Ok(());
}
// Don't render UI for cameras where it is explicitly disabled
if let Some(&CameraUi { is_enabled: false }) = camera_ui {
return Ok(());
}
// use the "default" view entity if it is defined
let view_entity = if let Ok(default_view) = self
.default_camera_view_query
.get_manual(world, input_view_entity)
{
default_view.0
} else {
input_view_entity
};
let pass_descriptor = RenderPassDescriptor {
label: Some("ui_pass"),
color_attachments: &[RenderPassColorAttachment {

View file

@ -2,7 +2,7 @@ use bevy_math::{DVec2, IVec2, Vec2};
use bevy_utils::{tracing::warn, Uuid};
use raw_window_handle::RawWindowHandle;
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct WindowId(Uuid);
/// Presentation mode for a window.

View file

@ -14,7 +14,7 @@ fn setup(
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<ColorMaterial>>,
) {
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(MaterialMesh2dBundle {
mesh: meshes.add(Mesh::from(shape::Quad::default())).into(),
transform: Transform::default().with_scale(Vec3::splat(128.)),

View file

@ -4,7 +4,7 @@
//! Check out the "mesh2d" example for simpler / higher level 2d meshes.
use bevy::{
core_pipeline::Transparent2d,
core_pipeline::core_2d::Transparent2d,
prelude::*,
reflect::TypeUuid,
render::{
@ -108,7 +108,7 @@ fn star(
));
commands
// And use an orthographic projection
.spawn_bundle(OrthographicCameraBundle::new_2d());
.spawn_bundle(Camera2dBundle::default());
}
/// A marker component for colored 2d meshes

View file

@ -30,7 +30,7 @@ fn setup(
// Insert the vertex colors as an attribute
mesh.insert_attribute(Mesh::ATTRIBUTE_COLOR, vertex_colors);
// Spawn
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(MaterialMesh2dBundle {
mesh: meshes.add(mesh).into(),
transform: Transform::default().with_scale(Vec3::splat(128.)),

View file

@ -17,7 +17,7 @@ enum Direction {
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands
.spawn_bundle(SpriteBundle {
texture: asset_server.load("branding/icon.png"),

View file

@ -59,7 +59,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
let enemy_b_handle = asset_server.load("textures/simplespace/enemy_B.png");
// 2D orthographic camera
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
let horizontal_margin = BOUNDS.x / 4.0;
let vertical_margin = BOUNDS.y / 4.0;

View file

@ -14,7 +14,7 @@ fn setup(
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<ColorMaterial>>,
) {
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
// Rectangle
commands.spawn_bundle(SpriteBundle {

View file

@ -10,7 +10,7 @@ fn main() {
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(SpriteBundle {
texture: asset_server.load("branding/icon.png"),
..default()

View file

@ -10,7 +10,7 @@ fn main() {
}
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(SpriteBundle {
texture: asset_server.load("branding/icon.png"),
sprite: Sprite {

View file

@ -40,7 +40,7 @@ fn setup(
let texture_handle = asset_server.load("textures/rpg/chars/gabe/gabe-idle-run.png");
let texture_atlas = TextureAtlas::from_grid(texture_handle, Vec2::new(24.0, 24.0), 7, 1);
let texture_atlas_handle = texture_atlases.add(texture_atlas);
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
commands
.spawn_bundle(SpriteSheetBundle {
texture_atlas: texture_atlas_handle,

View file

@ -36,7 +36,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
horizontal: HorizontalAlign::Center,
};
// 2d camera
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
// Demonstrate changing translation
commands
.spawn_bundle(Text2dBundle {

View file

@ -62,7 +62,7 @@ fn setup(
let atlas_handle = texture_atlases.add(texture_atlas);
// set up a scene to display our texture atlas
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(Camera2dBundle::default());
// draw a sprite from the atlas
commands.spawn_bundle(SpriteSheetBundle {
transform: Transform {

View file

@ -39,7 +39,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -203,7 +203,7 @@ fn setup(
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -16,7 +16,7 @@ fn main() {
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
commands.spawn_scene(asset_server.load("models/FlightHelmet/FlightHelmet.gltf#Scene0"));
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(0.7, 0.7, 1.0).looking_at(Vec3::new(0.0, 0.3, 0.0), Vec3::Y),
..default()
});

View file

@ -38,7 +38,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-3.0, 3.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -1,6 +1,6 @@
//! Shows how to create a 3D orthographic view (for isometric-look games or CAD applications).
use bevy::prelude::*;
use bevy::{prelude::*, render::camera::ScalingMode};
fn main() {
App::new()
@ -15,13 +15,17 @@ fn setup(
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
) {
// set up the camera
let mut camera = OrthographicCameraBundle::new_3d();
camera.orthographic_projection.scale = 3.0;
camera.transform = Transform::from_xyz(5.0, 5.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y);
// camera
commands.spawn_bundle(camera);
commands.spawn_bundle(Camera3dBundle {
projection: OrthographicProjection {
scale: 3.0,
scaling_mode: ScalingMode::FixedVertical(2.0),
..default()
}
.into(),
transform: Transform::from_xyz(5.0, 5.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});
// plane
commands.spawn_bundle(PbrBundle {

View file

@ -58,7 +58,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(5.0, 10.0, 10.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -64,12 +64,13 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(OrthographicCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(0.0, 0.0, 8.0).looking_at(Vec3::default(), Vec3::Y),
orthographic_projection: OrthographicProjection {
projection: OrthographicProjection {
scale: 0.01,
..default()
},
..OrthographicCameraBundle::new_3d()
}
.into(),
..default()
});
}

View file

@ -1,104 +1,24 @@
//! Shows how to render to a texture. Useful for mirrors, UI, or exporting images.
use bevy::{
core_pipeline::{
draw_3d_graph, node, AlphaMask3d, Opaque3d, RenderTargetClearColors, Transparent3d,
},
core_pipeline::clear_color::ClearColorConfig,
prelude::*,
render::{
camera::{ActiveCamera, Camera, CameraTypePlugin, RenderTarget},
render_graph::{Node, NodeRunError, RenderGraph, RenderGraphContext, SlotValue},
render_phase::RenderPhase,
camera::RenderTarget,
render_resource::{
Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
},
renderer::RenderContext,
view::RenderLayers,
RenderApp, RenderStage,
},
};
#[derive(Component, Default)]
pub struct FirstPassCamera;
// The name of the final node of the first pass.
pub const FIRST_PASS_DRIVER: &str = "first_pass_driver";
fn main() {
let mut app = App::new();
app.add_plugins(DefaultPlugins)
.add_plugin(CameraTypePlugin::<FirstPassCamera>::default())
App::new()
.add_plugins(DefaultPlugins)
.add_startup_system(setup)
.add_system(cube_rotator_system)
.add_system(rotator_system);
let render_app = app.sub_app_mut(RenderApp);
let driver = FirstPassCameraDriver::new(&mut render_app.world);
// This will add 3D render phases for the new camera.
render_app.add_system_to_stage(RenderStage::Extract, extract_first_pass_camera_phases);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
// Add a node for the first pass.
graph.add_node(FIRST_PASS_DRIVER, driver);
// The first pass's dependencies include those of the main pass.
graph
.add_node_edge(node::MAIN_PASS_DEPENDENCIES, FIRST_PASS_DRIVER)
.unwrap();
// Insert the first pass node: CLEAR_PASS_DRIVER -> FIRST_PASS_DRIVER -> MAIN_PASS_DRIVER
graph
.add_node_edge(node::CLEAR_PASS_DRIVER, FIRST_PASS_DRIVER)
.unwrap();
graph
.add_node_edge(FIRST_PASS_DRIVER, node::MAIN_PASS_DRIVER)
.unwrap();
app.run();
}
// Add 3D render phases for FIRST_PASS_CAMERA.
fn extract_first_pass_camera_phases(
mut commands: Commands,
active: Res<ActiveCamera<FirstPassCamera>>,
) {
if let Some(entity) = active.get() {
commands.get_or_spawn(entity).insert_bundle((
RenderPhase::<Opaque3d>::default(),
RenderPhase::<AlphaMask3d>::default(),
RenderPhase::<Transparent3d>::default(),
));
}
}
// A node for the first pass camera that runs draw_3d_graph with this camera.
struct FirstPassCameraDriver {
query: QueryState<Entity, With<FirstPassCamera>>,
}
impl FirstPassCameraDriver {
pub fn new(render_world: &mut World) -> Self {
Self {
query: QueryState::new(render_world),
}
}
}
impl Node for FirstPassCameraDriver {
fn update(&mut self, world: &mut World) {
self.query.update_archetypes(world);
}
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
for camera in self.query.iter_manual(world) {
graph.run_sub_graph(draw_3d_graph::NAME, vec![SlotValue::Entity(camera)])?;
}
Ok(())
}
.add_system(rotator_system)
.run();
}
// Marks the first pass cube (rendered to a texture.)
@ -114,7 +34,6 @@ fn setup(
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
mut images: ResMut<Assets<Image>>,
mut clear_colors: ResMut<RenderTargetClearColors>,
) {
let size = Extent3d {
width: 512,
@ -172,33 +91,22 @@ fn setup(
..default()
});
// First pass camera
let render_target = RenderTarget::Image(image_handle.clone());
clear_colors.insert(render_target.clone(), Color::WHITE);
commands
.spawn_bundle(PerspectiveCameraBundle::<FirstPassCamera> {
.spawn_bundle(Camera3dBundle {
camera_3d: Camera3d {
clear_color: ClearColorConfig::Custom(Color::WHITE),
},
camera: Camera {
target: render_target,
// render before the "main pass" camera
priority: -1,
target: RenderTarget::Image(image_handle.clone()),
..default()
},
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 15.0))
.looking_at(Vec3::default(), Vec3::Y),
..PerspectiveCameraBundle::new()
..default()
})
.insert(first_pass_layer);
// NOTE: omitting the RenderLayers component for this camera may cause a validation error:
//
// thread 'main' panicked at 'wgpu error: Validation Error
//
// Caused by:
// In a RenderPass
// note: encoder = `<CommandBuffer-(0, 1, Metal)>`
// In a pass parameter
// note: command buffer = `<CommandBuffer-(0, 1, Metal)>`
// Attempted to use texture (5, 1, Metal) mips 0..1 layers 0..1 as a combination of COLOR_TARGET within a usage scope.
//
// This happens because the texture would be written and read in the same frame, which is not allowed.
// So either render layers must be used to avoid this, or the texture must be double buffered.
let cube_size = 4.0;
let cube_handle = meshes.add(Mesh::from(shape::Box::new(cube_size, cube_size, cube_size)));
@ -226,7 +134,7 @@ fn setup(
.insert(MainPassCube);
// The main pass camera.
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 15.0))
.looking_at(Vec3::default(), Vec3::Y),
..default()

View file

@ -86,7 +86,7 @@ fn setup(
// camera
commands
.spawn_bundle(PerspectiveCameraBundle {
.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-1.0, 1.0, 1.0)
.looking_at(Vec3::new(-1.0, 1.0, 0.0), Vec3::Y),
..default()

View file

@ -111,7 +111,7 @@ fn setup(
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-5.0, 5.0, 5.0)
.looking_at(Vec3::new(-1.0, 1.0, 0.0), Vec3::Y),
..default()

View file

@ -78,7 +78,7 @@ fn setup(
..Default::default()
});
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(0.0, 6., 12.0).looking_at(Vec3::new(0., 1., 0.), Vec3::Y),
..Default::default()
});

View file

@ -16,7 +16,7 @@ fn setup(
mut materials: ResMut<Assets<StandardMaterial>>,
) {
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(1.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -87,7 +87,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(3.0, 5.0, 8.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -1,220 +1,60 @@
//! Shows how to render multiple passes to the same window, useful for rendering different views
//! or drawing an object on top regardless of depth.
//! Renders two 3d passes to the same window from different perspectives.
use bevy::{
core_pipeline::{draw_3d_graph, node, AlphaMask3d, Opaque3d, Transparent3d},
prelude::*,
render::{
camera::{ActiveCamera, Camera, CameraTypePlugin, RenderTarget},
render_graph::{Node, NodeRunError, RenderGraph, RenderGraphContext, SlotValue},
render_phase::RenderPhase,
renderer::RenderContext,
view::RenderLayers,
RenderApp, RenderStage,
},
window::WindowId,
};
// The name of the final node of the first pass.
pub const FIRST_PASS_DRIVER: &str = "first_pass_driver";
// Marks the camera that determines the view rendered in the first pass.
#[derive(Component, Default)]
struct FirstPassCamera;
use bevy::{core_pipeline::clear_color::ClearColorConfig, prelude::*};
fn main() {
let mut app = App::new();
app.add_plugins(DefaultPlugins)
.add_plugin(CameraTypePlugin::<FirstPassCamera>::default())
App::new()
.add_plugins(DefaultPlugins)
.add_startup_system(setup)
.add_system(cube_rotator_system)
.add_system(rotator_system)
.add_system(toggle_msaa);
let render_app = app.sub_app_mut(RenderApp);
let driver = FirstPassCameraDriver::new(&mut render_app.world);
// This will add 3D render phases for the new camera.
render_app.add_system_to_stage(RenderStage::Extract, extract_first_pass_camera_phases);
let mut graph = render_app.world.resource_mut::<RenderGraph>();
// Add a node for the first pass.
graph.add_node(FIRST_PASS_DRIVER, driver);
// The first pass's dependencies include those of the main pass.
graph
.add_node_edge(node::MAIN_PASS_DEPENDENCIES, FIRST_PASS_DRIVER)
.unwrap();
// Insert the first pass node: CLEAR_PASS_DRIVER -> FIRST_PASS_DRIVER -> MAIN_PASS_DRIVER
graph
.add_node_edge(node::CLEAR_PASS_DRIVER, FIRST_PASS_DRIVER)
.unwrap();
graph
.add_node_edge(FIRST_PASS_DRIVER, node::MAIN_PASS_DRIVER)
.unwrap();
app.run();
.run();
}
// Add 3D render phases for FirstPassCamera.
fn extract_first_pass_camera_phases(
mut commands: Commands,
active: Res<ActiveCamera<FirstPassCamera>>,
) {
if let Some(entity) = active.get() {
commands.get_or_spawn(entity).insert_bundle((
RenderPhase::<Opaque3d>::default(),
RenderPhase::<AlphaMask3d>::default(),
RenderPhase::<Transparent3d>::default(),
));
}
}
// A node for the first pass camera that runs draw_3d_graph with this camera.
struct FirstPassCameraDriver {
query: QueryState<Entity, With<FirstPassCamera>>,
}
impl FirstPassCameraDriver {
pub fn new(render_world: &mut World) -> Self {
Self {
query: QueryState::new(render_world),
}
}
}
impl Node for FirstPassCameraDriver {
fn update(&mut self, world: &mut World) {
self.query.update_archetypes(world);
}
fn run(
&self,
graph: &mut RenderGraphContext,
_render_context: &mut RenderContext,
world: &World,
) -> Result<(), NodeRunError> {
for camera in self.query.iter_manual(world) {
graph.run_sub_graph(draw_3d_graph::NAME, vec![SlotValue::Entity(camera)])?;
}
Ok(())
}
}
// Marks the first pass cube.
#[derive(Component)]
struct FirstPassCube;
// Marks the main pass cube.
#[derive(Component)]
struct MainPassCube;
/// set up a simple 3D scene
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<StandardMaterial>>,
) {
let cube_handle = meshes.add(Mesh::from(shape::Cube { size: 4.0 }));
let cube_material_handle = materials.add(StandardMaterial {
base_color: Color::GREEN,
reflectance: 0.02,
unlit: false,
..Default::default()
// plane
commands.spawn_bundle(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Plane { size: 5.0 })),
material: materials.add(Color::rgb(0.3, 0.5, 0.3).into()),
..default()
});
let split = 2.0;
// This specifies the layer used for the first pass, which will be attached to the first pass camera and cube.
let first_pass_layer = RenderLayers::layer(1);
// The first pass cube.
commands
.spawn_bundle(PbrBundle {
mesh: cube_handle,
material: cube_material_handle,
transform: Transform::from_translation(Vec3::new(-split, 0.0, 1.0)),
..Default::default()
})
.insert(FirstPassCube)
.insert(first_pass_layer);
// Light
// NOTE: Currently lights are shared between passes - see https://github.com/bevyengine/bevy/issues/3462
// cube
commands.spawn_bundle(PbrBundle {
mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
material: materials.add(Color::rgb(0.8, 0.7, 0.6).into()),
transform: Transform::from_xyz(0.0, 0.5, 0.0),
..default()
});
// light
commands.spawn_bundle(PointLightBundle {
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 10.0)),
..Default::default()
point_light: PointLight {
intensity: 1500.0,
shadows_enabled: true,
..default()
},
transform: Transform::from_xyz(4.0, 8.0, 4.0),
..default()
});
// camera
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});
// First pass camera
commands
.spawn_bundle(PerspectiveCameraBundle::<FirstPassCamera> {
camera: Camera {
target: RenderTarget::Window(WindowId::primary()),
..Default::default()
},
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 15.0))
.looking_at(Vec3::default(), Vec3::Y),
..PerspectiveCameraBundle::new()
})
.insert(first_pass_layer);
let cube_size = 4.0;
let cube_handle = meshes.add(Mesh::from(shape::Box::new(cube_size, cube_size, cube_size)));
let material_handle = materials.add(StandardMaterial {
base_color: Color::RED,
reflectance: 0.02,
unlit: false,
..Default::default()
});
// Main pass cube.
commands
.spawn_bundle(PbrBundle {
mesh: cube_handle,
material: material_handle,
transform: Transform {
translation: Vec3::new(split, 0.0, -4.5),
rotation: Quat::from_rotation_x(-std::f32::consts::PI / 5.0),
..Default::default()
},
..Default::default()
})
.insert(MainPassCube);
// The main pass camera.
commands.spawn_bundle(PerspectiveCameraBundle {
transform: Transform::from_translation(Vec3::new(0.0, 0.0, 15.0))
.looking_at(Vec3::default(), Vec3::Y),
..Default::default()
// camera
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(10.0, 10., -5.0).looking_at(Vec3::ZERO, Vec3::Y),
camera_3d: Camera3d {
clear_color: ClearColorConfig::None,
},
camera: Camera {
// renders after / on top of the main camera
priority: 1,
..default()
},
..default()
});
}
/// Rotates the inner cube (first pass)
fn rotator_system(time: Res<Time>, mut query: Query<&mut Transform, With<FirstPassCube>>) {
for mut transform in query.iter_mut() {
transform.rotation *= Quat::from_rotation_x(1.5 * time.delta_seconds());
transform.rotation *= Quat::from_rotation_z(1.3 * time.delta_seconds());
}
}
/// Rotates the outer cube (main pass)
fn cube_rotator_system(time: Res<Time>, mut query: Query<&mut Transform, With<MainPassCube>>) {
for mut transform in query.iter_mut() {
transform.rotation *= Quat::from_rotation_x(1.0 * time.delta_seconds());
transform.rotation *= Quat::from_rotation_y(0.7 * time.delta_seconds());
}
}
fn toggle_msaa(input: Res<Input<KeyCode>>, mut msaa: ResMut<Msaa>) {
if input.just_pressed(KeyCode::M) {
if msaa.samples == 4 {
info!("Not using MSAA");
msaa.samples = 1;
} else {
info!("Using 4x MSAA");
msaa.samples = 4;
}
}
}

View file

@ -31,7 +31,7 @@ fn setup(
transform: Transform::from_xyz(4.0, 5.0, 4.0),
..default()
});
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(1.05, 0.9, 1.5)
.looking_at(Vec3::new(0.0, 0.3, 0.0), Vec3::Y),
..default()

View file

@ -53,7 +53,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -49,7 +49,7 @@ fn setup(
..default()
});
// camera
commands.spawn_bundle(PerspectiveCameraBundle {
commands.spawn_bundle(Camera3dBundle {
transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
..default()
});

View file

@ -110,11 +110,11 @@ Example | File | Description
`parenting` | [`3d/parenting.rs`](./3d/parenting.rs) | Demonstrates parent->child relationships and relative transformations
`pbr` | [`3d/pbr.rs`](./3d/pbr.rs) | Demonstrates use of Physically Based Rendering (PBR) properties
`render_to_texture` | [`3d/render_to_texture.rs`](./3d/render_to_texture.rs) | Shows how to render to a texture, useful for mirrors, UI, or exporting images
`two_passes` | [`3d/two_passes.rs`](./3d/two_passes.rs) | Shows how to render multiple passes to the same window, useful for rendering different views or drawing an object on top regardless of depth
`shadow_caster_receiver` | [`3d/shadow_caster_receiver.rs`](./3d/shadow_caster_receiver.rs) | Demonstrates how to prevent meshes from casting/receiving shadows in a 3d scene
`shadow_biases` | [`3d/shadow_biases.rs`](./3d/shadow_biases.rs) | Demonstrates how shadow biases affect shadows in a 3d scene
`spherical_area_lights` | [`3d/spherical_area_lights.rs`](./3d/spherical_area_lights.rs) | Demonstrates how point light radius values affect light behavior.
`texture` | [`3d/texture.rs`](./3d/texture.rs) | Shows configuration of texture materials
`two_passes` | [`3d/two_passes.rs`](./3d/two_passes.rs) | Renders two 3d passes to the same window from different perspectives.
`update_gltf_scene` | [`3d/update_gltf_scene.rs`](./3d/update_gltf_scene.rs) | Update a scene from a gltf file, either by spawning the scene as a child of another entity, or by accessing the entities of the scene
`vertex_colors` | [`3d/vertex_colors.rs`](./3d/vertex_colors.rs) | Shows the use of vertex colors
`wireframe` | [`3d/wireframe.rs`](./3d/wireframe.rs) | Showcases wireframe rendering

View file

@@ -34,7 +34,7 @@ fn setup(
         ..default()
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -32,7 +32,7 @@ fn setup(
     ]));
     // Camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(100.0, 100.0, 150.0)
             .looking_at(Vec3::new(0.0, 20.0, 0.0), Vec3::Y),
         ..Default::default()


@@ -22,7 +22,7 @@ fn setup(
     mut animations: ResMut<Assets<AnimationClip>>,
 ) {
     // Camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -39,7 +39,7 @@ fn setup(
     mut skinned_mesh_inverse_bindposes_assets: ResMut<Assets<SkinnedMeshInverseBindposes>>,
 ) {
     // Create a camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -19,7 +19,7 @@ fn main() {
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
     // Create a camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -11,5 +11,5 @@ fn main() {
 }
 fn setup_system(mut commands: Commands) {
-    commands.spawn_bundle(PerspectiveCameraBundle::default());
+    commands.spawn_bundle(Camera3dBundle::default());
 }


@@ -76,7 +76,7 @@ fn setup(
         ..default()
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(0.0, 3.0, 10.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -89,7 +89,7 @@ fn main() {
 }
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands.spawn_bundle(SpriteBundle {
         texture: asset_server.load("branding/icon.png"),
         ..default()


@@ -31,7 +31,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
         ..default()
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(2.0, 2.0, 6.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -117,7 +117,7 @@ fn setup_env(mut commands: Commands) {
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(offset, offset, 15.0)
             .looking_at(Vec3::new(offset, offset, 0.0), Vec3::Y),
         ..default()


@@ -25,7 +25,7 @@ struct StreamEvent(u32);
 struct LoadedFont(Handle<Font>);
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     let (tx, rx) = bounded::<u32>(10);
     std::thread::spawn(move || loop {


@@ -11,7 +11,7 @@ fn main() {
 }
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     let texture = asset_server.load("branding/icon.png");
     // Spawn a root entity with no parent


@@ -141,7 +141,7 @@ fn generate_bodies(
             ..default()
         });
     });
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(0.0, 10.5, -30.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -7,7 +7,7 @@ use rand::random;
 struct Velocity(Vec2);
 fn spawn_system(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     let texture = asset_server.load("branding/icon.png");
     for _ in 0..128 {
         commands


@@ -27,7 +27,7 @@ fn main() {
 struct MyComponent;
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands
         .spawn_bundle(SpriteBundle {
             texture: asset_server.load("branding/icon.png"),


@@ -7,6 +7,7 @@ fn main() {
     App::new()
         .add_plugins(DefaultPlugins)
         .add_state(AppState::Menu)
+        .add_startup_system(setup)
         .add_system_set(SystemSet::on_enter(AppState::Menu).with_system(setup_menu))
         .add_system_set(SystemSet::on_update(AppState::Menu).with_system(menu))
         .add_system_set(SystemSet::on_exit(AppState::Menu).with_system(cleanup_menu))
@@ -33,9 +34,11 @@ const NORMAL_BUTTON: Color = Color::rgb(0.15, 0.15, 0.15);
 const HOVERED_BUTTON: Color = Color::rgb(0.25, 0.25, 0.25);
 const PRESSED_BUTTON: Color = Color::rgb(0.35, 0.75, 0.35);
+fn setup(mut commands: Commands) {
+    commands.spawn_bundle(Camera2dBundle::default());
+}
 fn setup_menu(mut commands: Commands, asset_server: Res<AssetServer>) {
-    // ui camera
-    commands.spawn_bundle(UiCameraBundle::default());
     let button_entity = commands
         .spawn_bundle(ButtonBundle {
             style: Style {
@@ -97,7 +100,6 @@ fn cleanup_menu(mut commands: Commands, menu_data: Res<MenuData>) {
 }
 fn setup_game(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
     commands.spawn_bundle(SpriteBundle {
         texture: asset_server.load("branding/icon.png"),
         ..default()

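One pattern worth calling out across these hunks: dedicated UI cameras are gone. Wherever an example spawned a `UiCameraBundle` alongside its world camera, a single camera now drives both the world and the UI. A minimal before/after sketch, mirroring the diffs above:

```rust
// Before: a 2d world camera plus a separate UI camera
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
commands.spawn_bundle(UiCameraBundle::default());

// After: one camera renders the 2d world and the UI
commands.spawn_bundle(Camera2dBundle::default());
```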

@@ -1,6 +1,6 @@
 //! Eat the cakes. Eat them all. An example 3D game.
-use bevy::{ecs::schedule::SystemSet, prelude::*, render::camera::Camera3d, time::FixedTimestep};
+use bevy::{ecs::schedule::SystemSet, prelude::*, time::FixedTimestep};
 use rand::Rng;
 #[derive(Clone, Eq, PartialEq, Debug, Hash)]
@@ -79,7 +79,7 @@ const RESET_FOCUS: [f32; 3] = [
 fn setup_cameras(mut commands: Commands, mut game: ResMut<Game>) {
     game.camera_should_focus = Vec3::from(RESET_FOCUS);
     game.camera_is_focus = game.camera_should_focus;
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(
             -(BOARD_SIZE_I as f32 / 2.0),
             2.0 * BOARD_SIZE_J as f32 / 3.0,
@@ -88,7 +88,6 @@ fn setup_cameras(mut commands: Commands, mut game: ResMut<Game>) {
         .looking_at(game.camera_is_focus, Vec3::Y),
         ..default()
     });
-    commands.spawn_bundle(UiCameraBundle::default());
 }
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>, mut game: ResMut<Game>) {


@@ -172,9 +172,8 @@ struct Scoreboard {
 // Add the game's entities to our world
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    // Cameras
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
-    commands.spawn_bundle(UiCameraBundle::default());
+    // Camera
+    commands.spawn_bundle(Camera2dBundle::default());
     // Sound
     let ball_collision_sound = asset_server.load("sounds/breakout_collision.ogg");


@@ -132,8 +132,7 @@ fn setup_contributor_selection(mut commands: Commands, asset_server: Res<AssetSe
 }
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
-    commands.spawn_bundle(UiCameraBundle::default());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands
         .spawn()


@@ -42,9 +42,8 @@ fn main() {
         .run();
 }
-// As there isn't an actual game, setup is just adding a `UiCameraBundle`
 fn setup(mut commands: Commands) {
-    commands.spawn_bundle(UiCameraBundle::default());
+    commands.spawn_bundle(Camera2dBundle::default());
 }
 mod splash {


@@ -82,7 +82,7 @@ fn setup_scene(
         ..default()
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -104,7 +104,7 @@ fn save_scene_system(world: &mut World) {
 // This is only necessary for the info message in the UI. See examples/ui/text.rs for a standalone
 // text example.
 fn infotext_system(mut commands: Commands, asset_server: Res<AssetServer>) {
-    commands.spawn_bundle(UiCameraBundle::default());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands.spawn_bundle(TextBundle {
         style: Style {
             align_self: AlignSelf::FlexEnd,


@@ -3,7 +3,7 @@
 //! This example uses a specialized pipeline.
 use bevy::{
-    core_pipeline::Transparent3d,
+    core_pipeline::core_3d::Transparent3d,
     ecs::system::{lifetimeless::SRes, SystemParamItem},
     pbr::{
         DrawMesh, MeshPipeline, MeshPipelineKey, MeshUniform, SetMeshBindGroup,
@@ -44,7 +44,7 @@ fn setup(mut commands: Commands, mut meshes: ResMut<Assets<Mesh>>) {
     ));
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });

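The shader examples in the surrounding hunks also pick up a moved import: `Transparent3d` now lives under the `core_3d` module of `bevy::core_pipeline` (the 2d render phases live under a matching `core_2d` module). The change is purely a path update:

```rust
// Old import path (before this change):
// use bevy::core_pipeline::Transparent3d;

// New import path:
use bevy::core_pipeline::core_3d::Transparent3d;
```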

@@ -4,7 +4,6 @@
 //! is rendered to the screen.
 use bevy::{
-    core_pipeline::node::MAIN_PASS_DEPENDENCIES,
     prelude::*,
     render::{
         extract_resource::{ExtractResource, ExtractResourcePlugin},
@@ -58,7 +57,7 @@ fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
         texture: image.clone(),
         ..default()
     });
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands.insert_resource(GameOfLifeImage(image));
 }
@@ -78,7 +77,10 @@ impl Plugin for GameOfLifeComputePlugin {
         let mut render_graph = render_app.world.resource_mut::<RenderGraph>();
         render_graph.add_node("game_of_life", GameOfLifeNode::default());
         render_graph
-            .add_node_edge("game_of_life", MAIN_PASS_DEPENDENCIES)
+            .add_node_edge(
+                "game_of_life",
+                bevy::render::main_graph::node::CAMERA_DRIVER,
+            )
             .unwrap();
     }
 }

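The compute shader hunk above also shows the migration for custom render graph nodes: an edge that used to target `MAIN_PASS_DEPENDENCIES` now targets the camera driver node, which is what kicks off each camera's render graph. A rough sketch of that wiring outside the diff (the `MyNode` type, the `"my_node"` label, and `wire_node` are placeholders, not part of any example):

```rust
use bevy::prelude::*;
use bevy::render::{
    main_graph::node::CAMERA_DRIVER,
    render_graph::{Node, NodeRunError, RenderGraph, RenderGraphContext},
    renderer::RenderContext,
    RenderApp,
};

// Placeholder no-op node standing in for a real compute/render node.
#[derive(Default)]
struct MyNode;

impl Node for MyNode {
    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        _render_context: &mut RenderContext,
        _world: &World,
    ) -> Result<(), NodeRunError> {
        Ok(())
    }
}

fn wire_node(app: &mut App) {
    let render_app = app.sub_app_mut(RenderApp);
    let mut render_graph = render_app.world.resource_mut::<RenderGraph>();
    render_graph.add_node("my_node", MyNode::default());
    // Run the custom node before the camera driver, as the hunk above does.
    render_graph
        .add_node_edge("my_node", CAMERA_DRIVER)
        .unwrap();
}
```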

@@ -55,7 +55,7 @@ fn setup(
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -1,7 +1,7 @@
 //! A shader that uses "shaders defs" (a bevy tool to selectively toggle parts of a shader)
 use bevy::{
-    core_pipeline::Transparent3d,
+    core_pipeline::core_3d::Transparent3d,
     pbr::{
         DrawMesh, MeshPipeline, MeshPipelineKey, MeshUniform, SetMeshBindGroup,
         SetMeshViewBindGroup,
@@ -78,7 +78,7 @@ fn setup(mut commands: Commands, mut meshes: ResMut<Assets<Mesh>>) {
     ));
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -1,7 +1,7 @@
 //! A shader that renders a mesh multiple times in one draw call.
 use bevy::{
-    core_pipeline::Transparent3d,
+    core_pipeline::core_3d::Transparent3d,
     ecs::system::{lifetimeless::*, SystemParamItem},
     math::prelude::*,
     pbr::{MeshPipeline, MeshPipelineKey, MeshUniform, SetMeshBindGroup, SetMeshViewBindGroup},
@@ -58,7 +58,7 @@ fn setup(mut commands: Commands, mut meshes: ResMut<Assets<Mesh>>) {
     ));
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(0.0, 0.0, 15.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -42,7 +42,7 @@ fn setup(
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -38,7 +38,7 @@ fn setup(
     });
     // camera
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_xyz(-2.0, 2.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
         ..default()
     });


@@ -44,10 +44,6 @@ fn setup(
         transform: Transform::from_xyz(4.0, 8.0, 4.0),
         ..default()
     });
-    commands.spawn_bundle(PerspectiveCameraBundle {
-        transform: Transform::from_xyz(0.0, 2.5, 1.0).looking_at(Vec3::default(), Vec3::Y),
-        ..default()
-    });
     commands.spawn().insert_bundle(MaterialMeshBundle {
         mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
@@ -62,7 +58,7 @@ fn setup(
     // camera
     commands
-        .spawn_bundle(PerspectiveCameraBundle {
+        .spawn_bundle(Camera3dBundle {
             transform: Transform::from_xyz(4.0, 2.5, 4.0).looking_at(Vec3::ZERO, Vec3::Y),
             ..default()
         })


@@ -92,8 +92,7 @@ struct StatsText;
 fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
     let texture = asset_server.load("branding/icon.png");
-    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
-    commands.spawn_bundle(UiCameraBundle::default());
+    commands.spawn_bundle(Camera2dBundle::default());
     commands
         .spawn_bundle(TextBundle {
             text: Text {


@@ -61,7 +61,7 @@ fn setup(
             }
             // camera
-            commands.spawn_bundle(PerspectiveCameraBundle::default());
+            commands.spawn_bundle(Camera3dBundle::default());
         }
         _ => {
             // NOTE: This pattern is good for demonstrating that frustum culling is working correctly
@@ -104,7 +104,7 @@ fn setup(
                 }
             }
             // camera
-            commands.spawn_bundle(PerspectiveCameraBundle {
+            commands.spawn_bundle(Camera3dBundle {
                 transform: Transform::from_xyz(WIDTH as f32, HEIGHT as f32, WIDTH as f32),
                 ..default()
             });


@@ -145,7 +145,7 @@ fn setup(
         radius * 0.5 * zoom,
         radius * 1.5 * zoom,
     );
-    commands.spawn_bundle(PerspectiveCameraBundle {
+    commands.spawn_bundle(Camera3dBundle {
         transform: Transform::from_translation(translation)
             .looking_at(0.2 * Vec3::new(translation.x, 0.0, translation.z), Vec3::Y),
         ..Default::default()


@@ -6,7 +6,7 @@ use bevy::{
     math::{DVec2, DVec3},
     pbr::{ExtractedPointLight, GlobalLightMeta},
     prelude::*,
-    render::{camera::CameraProjection, primitives::Frustum, RenderApp, RenderStage},
+    render::{camera::ScalingMode, RenderApp, RenderStage},
 };
 use rand::{thread_rng, Rng};
@@ -77,21 +77,16 @@
     // camera
     match std::env::args().nth(1).as_deref() {
-        Some("orthographic") => {
-            let mut orthographic_camera_bundle = OrthographicCameraBundle::new_3d();
-            orthographic_camera_bundle.orthographic_projection.scale = 20.0;
-            let view_projection = orthographic_camera_bundle
-                .orthographic_projection
-                .get_projection_matrix();
-            orthographic_camera_bundle.frustum = Frustum::from_view_projection(
-                &view_projection,
-                &Vec3::ZERO,
-                &Vec3::Z,
-                orthographic_camera_bundle.orthographic_projection.far(),
-            );
-            commands.spawn_bundle(orthographic_camera_bundle)
-        }
-        _ => commands.spawn_bundle(PerspectiveCameraBundle::default()),
+        Some("orthographic") => commands.spawn_bundle(Camera3dBundle {
+            projection: OrthographicProjection {
+                scale: 20.0,
+                scaling_mode: ScalingMode::FixedHorizontal(1.0),
+                ..default()
+            }
+            .into(),
+            ..default()
+        }),
+        _ => commands.spawn_bundle(Camera3dBundle::default()),
     };
     // add one cube, the only one with strong handles

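That last hunk doubles as a reference for orthographic 3d cameras after this change: instead of mutating an `OrthographicCameraBundle` and rebuilding its `Frustum` by hand, the projection is handed to `Camera3dBundle` through its `projection` field. Pulled out of the diff as a standalone sketch (the system name is a placeholder):

```rust
use bevy::prelude::*;
use bevy::render::camera::ScalingMode;

fn spawn_orthographic_3d_camera(mut commands: Commands) {
    commands.spawn_bundle(Camera3dBundle {
        // `Projection` is an enum; `OrthographicProjection` converts into it via `.into()`
        projection: OrthographicProjection {
            scale: 20.0,
            scaling_mode: ScalingMode::FixedHorizontal(1.0),
            ..default()
        }
        .into(),
        ..default()
    });
}
```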
Some files were not shown because too many files have changed in this diff.