# Objective
- Closes #4464
## Solution
- Specify default mag and min filter types for `Image` instead of using `wgpu`'s defaults.
---
## Changelog
### Changed
- Default `Image` filtering changed from `Nearest` to `Linear`.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- fix #4946
- fix running 3d in wasm
## Solution
- since #4867, the imports are split differently, and this shader def was not always set correctly depending on the shader used
- add it when needed
# Objective
Users should be able to render cameras to specific areas of a render target, which enables scenarios like split screen, minimaps, etc.
Builds on the new Camera Driven Rendering added here: #4745
Fixes: #202
Alternative to #1389 and #3626 (which are incompatible with the new Camera Driven Rendering)
## Solution
![image](https://user-images.githubusercontent.com/2694663/171560044-f0694f67-0cd9-4598-83e2-a9658c4fed57.png)
Cameras can now configure an optional "viewport", which defines a rectangle within their render target to draw to. If a `Viewport` is defined, the camera's `CameraProjection`, `View`, and visibility calculations will use the viewport configuration instead of the full render target.
```rust
// This camera will render to the first half of the primary window (on the left side).
commands.spawn_bundle(Camera3dBundle {
    camera: Camera {
        viewport: Some(Viewport {
            physical_position: UVec2::new(0, 0),
            physical_size: UVec2::new(window.physical_width() / 2, window.physical_height()),
            depth: 0.0..1.0,
        }),
        ..default()
    },
    ..default()
});
```
To account for this, the `Camera` component has received a few adjustments:
* `Camera` now has some new getter functions:
* `logical_viewport_size`, `physical_viewport_size`, `logical_target_size`, `physical_target_size`, `projection_matrix`
* All computed camera values are now private and live on the `ComputedCameraValues` field (logical/physical width/height, the projection matrix). They are now exposed on `Camera` via getters/setters. This wasn't _needed_ for viewports, but it was long overdue.
---
## Changelog
### Added
* `Camera` components now have a `viewport` field, which can be set to draw to a portion of a render target instead of the full target.
* `Camera` component has some new functions: `logical_viewport_size`, `physical_viewport_size`, `logical_target_size`, `physical_target_size`, and `projection_matrix`
* Added a new `split_screen` example illustrating how to render two cameras to the same scene
## Migration Guide
`Camera::projection_matrix` is no longer a public field. Use the new `Camera::projection_matrix()` method instead:
```rust
// Bevy 0.7
let projection = camera.projection_matrix;
// Bevy 0.8
let projection = camera.projection_matrix();
```
This adds "high level camera driven rendering" to Bevy. The goal is to give users more control over what gets rendered (and where) without needing to deal with render logic. This will make scenarios like "render to texture", "multiple windows", "split screen", "2d on 3d", "3d on 2d", "pass layering", and more significantly easier.
Here is an [example of a 2d render sandwiched between two 3d renders (each from a different perspective)](https://gist.github.com/cart/4fe56874b2e53bc5594a182fc76f4915):
![image](https://user-images.githubusercontent.com/2694663/168411086-af13dec8-0093-4a84-bdd4-d4362d850ffa.png)
Users can now spawn a camera, point it at a RenderTarget (a texture or a window), and it will "just work".
Rendering to a second window is as simple as spawning a second camera and assigning it to a specific window id:
```rust
// main camera (main window)
commands.spawn_bundle(Camera2dBundle::default());
// second camera (other window)
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Window(window_id),
        ..default()
    },
    ..default()
});
```
Rendering to a texture is as simple as pointing the camera at a texture:
```rust
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Texture(image_handle),
        ..default()
    },
    ..default()
});
```
Cameras now have a "render priority", which controls the order they are drawn in. If you want to use a camera's output texture as a texture in the main pass, just set the priority to a number lower than the main pass camera (which defaults to `0`).
```rust
// main pass camera with a default priority of 0
commands.spawn_bundle(Camera2dBundle::default());
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        target: RenderTarget::Texture(image_handle.clone()),
        priority: -1,
        ..default()
    },
    ..default()
});
commands.spawn_bundle(SpriteBundle {
    texture: image_handle,
    ..default()
});
```
Priority can also be used to layer cameras on top of each other for the same `RenderTarget`. This is what "2d on top of 3d" looks like in the new system:
```rust
commands.spawn_bundle(Camera3dBundle::default());
commands.spawn_bundle(Camera2dBundle {
    camera: Camera {
        // this will render 2d entities "on top" of the default 3d camera's render
        priority: 1,
        ..default()
    },
    ..default()
});
```
There is no longer the concept of a global "active camera". Resources like `ActiveCamera<Camera2d>` and `ActiveCamera<Camera3d>` have been replaced with the camera-specific `Camera::is_active` field. This does put the onus on users to manage which cameras should be active.
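As a hedged illustration (not code from this PR), toggling cameras now looks something like the following sketch:
```rust
// Hypothetical system: flip the active state of every camera each time it runs.
fn toggle_cameras(mut cameras: Query<&mut Camera>) {
    for mut camera in cameras.iter_mut() {
        camera.is_active = !camera.is_active;
    }
}
```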
Cameras are now assigned a single render graph as an "entry point", which is configured on each camera entity using the new `CameraRenderGraph` component. The old `PerspectiveCameraBundle` and `OrthographicCameraBundle` (generic on camera marker components like Camera2d and Camera3d) have been replaced by `Camera3dBundle` and `Camera2dBundle`, which set 3d and 2d default values for the `CameraRenderGraph` and projections.
```rust
// old 3d perspective camera
commands.spawn_bundle(PerspectiveCameraBundle::default());
// new 3d perspective camera
commands.spawn_bundle(Camera3dBundle::default());
```
```rust
// old 2d orthographic camera
commands.spawn_bundle(OrthographicCameraBundle::new_2d());
// new 2d orthographic camera
commands.spawn_bundle(Camera2dBundle::default());
```
```rust
// old 3d orthographic camera
commands.spawn_bundle(OrthographicCameraBundle::new_3d());
// new 3d orthographic camera
commands.spawn_bundle(Camera3dBundle {
    projection: OrthographicProjection {
        scale: 3.0,
        scaling_mode: ScalingMode::FixedVertical,
        ..default()
    }
    .into(),
    ..default()
});
```
Note that `Camera3dBundle` now uses a new `Projection` enum instead of hard coding the projection into the type. There are a number of motivators for this change: the render graph is now a part of the bundle, the way "generic bundles" work in the rust type system prevents nice `..default()` syntax, and changing projections at runtime is much easier with an enum (e.g. for editor scenarios). I'm open to discussing this choice, but I'm relatively certain we will all come to the same conclusion here. `Camera2dBundle` and `Camera3dBundle` are much clearer than being generic on marker components / using non-default constructors.
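As a hedged sketch (not from this PR) of what runtime switching with the `Projection` enum can look like, assuming the projection types and `ScalingMode` are imported from `bevy_render`'s camera module:
```rust
// Hypothetical system: toggle every camera between perspective and orthographic.
fn toggle_projection(mut projections: Query<&mut Projection>) {
    for mut projection in projections.iter_mut() {
        let new_projection = match *projection {
            // switch perspective cameras to an orthographic projection
            Projection::Perspective(_) => Projection::Orthographic(OrthographicProjection {
                scale: 3.0,
                scaling_mode: ScalingMode::FixedVertical,
                ..default()
            }),
            // and orthographic cameras back to the default perspective projection
            Projection::Orthographic(_) => Projection::Perspective(PerspectiveProjection::default()),
        };
        *projection = new_projection;
    }
}
```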
If you want to run a custom render graph on a camera, just set the `CameraRenderGraph` component:
```rust
commands.spawn_bundle(Camera3dBundle {
    camera_render_graph: CameraRenderGraph::new(some_render_graph_name),
    ..default()
});
```
Just note that if the graph requires data from specific components to work (such as `Camera3d` config, which is provided in the `Camera3dBundle`), make sure the relevant components have been added.
Speaking of using components to configure graphs / passes, there are a number of new configuration options:
```rust
commands.spawn_bundle(Camera3dBundle {
    camera_3d: Camera3d {
        // overrides the default global clear color
        clear_color: ClearColorConfig::Custom(Color::RED),
        ..default()
    },
    ..default()
});
commands.spawn_bundle(Camera3dBundle {
    camera_3d: Camera3d {
        // disables clearing
        clear_color: ClearColorConfig::None,
        ..default()
    },
    ..default()
});
```
Expect to see more of the "graph configuration Components on Cameras" pattern in the future.
By popular demand, UI no longer requires a dedicated camera. `UiCameraBundle` has been removed. `Camera2dBundle` and `Camera3dBundle` now both default to rendering UI as part of their own render graphs. To disable UI rendering for a camera, disable it using the `CameraUi` component:
```rust
commands
    .spawn_bundle(Camera3dBundle::default())
    .insert(CameraUi {
        is_enabled: false,
        ..default()
    });
```
## Other Changes
* The separate clear pass has been removed. We should revisit this for things like sky rendering, but I think this PR should "keep it simple" until we're ready to properly support that (for code complexity and performance reasons). We can come up with the right design for a modular clear pass in a follow-up PR.
* I reorganized bevy_core_pipeline into Core2dPlugin and Core3dPlugin (and core_2d / core_3d modules). Everything is pretty much the same as before, just logically separate. I've moved relevant types (like Camera2d, Camera3d, Camera3dBundle, Camera2dBundle) into their relevant modules, which is what motivated this reorganization.
* I adapted the `scene_viewer` example (which relied on the ActiveCameras behavior) to the new system. I also refactored bits and pieces to be a bit simpler.
* All of the examples have been ported to the new camera approach. `render_to_texture` and `multiple_windows` are now _much_ simpler. I removed `two_passes` because it is less relevant with the new approach. If someone wants to add a new "layered custom pass with CameraRenderGraph" example, that might fill a similar niche. But I don't feel much pressure to add that in this pr.
* Cameras now have `target_logical_size` and `target_physical_size` fields, which makes finding the size of a camera's render target _much_ simpler. As a result, the `Assets<Image>` and `Windows` parameters were removed from `Camera::world_to_screen`, making that operation much more ergonomic.
* Render order ambiguities between cameras with the same target and the same priority now produce a warning. This accomplishes two goals:
1. Now that there is no "global" active camera, by default spawning two cameras will result in two renders (one covering the other). This would be a silent performance killer that would be hard to detect after the fact. By detecting ambiguities, we can provide a helpful warning when this occurs.
2. Render order ambiguities could result in unexpected / unpredictable render results. Resolving them makes sense.
## Follow Up Work
* Per-Camera viewports, which will make it possible to render to a smaller area inside of a RenderTarget (great for something like splitscreen)
* Camera-specific MSAA config (should use the same "overriding" pattern used for ClearColor)
* Graph Based Camera Ordering: priorities are simple, but they make complicated ordering constraints harder to express. We should consider adopting a "graph based" camera ordering model with "before" and "after" relationships to other cameras (or build it "on top" of the priority system).
* Consider allowing graphs to run subgraphs from any nest level (aka a global namespace for graphs). Right now the 2d and 3d graphs each need their own UI subgraph, which feels "fine" in the short term. But being able to share subgraphs between other subgraphs seems valuable.
* Consider splitting `bevy_core_pipeline` into `bevy_core_2d` and `bevy_core_3d` packages. There's a shared "clear color" dependency here, which would need a new home.
# Objective
- Split PBR and 2D mesh shaders into types and bindings to prepare the shaders to be more reusable.
- See #3969 for details. I'm doing this in multiple steps to make review easier.
---
## Changelog
- Changed: 2D and PBR mesh shaders are now split into types and bindings, the following shader imports are available: `bevy_pbr::mesh_view_types`, `bevy_pbr::mesh_view_bindings`, `bevy_pbr::mesh_types`, `bevy_pbr::mesh_bindings`, `bevy_sprite::mesh2d_view_types`, `bevy_sprite::mesh2d_view_bindings`, `bevy_sprite::mesh2d_types`, `bevy_sprite::mesh2d_bindings`
## Migration Guide
- In shaders for 3D meshes:
- `#import bevy_pbr::mesh_view_bind_group` -> `#import bevy_pbr::mesh_view_bindings`
- `#import bevy_pbr::mesh_struct` -> `#import bevy_pbr::mesh_types`
- NOTE: If you are using the mesh bind group at bind group index 2, you can remove those binding statements in your shader and just use `#import bevy_pbr::mesh_bindings` which itself imports the mesh types needed for the bindings.
- In shaders for 2D meshes:
- `#import bevy_sprite::mesh2d_view_bind_group` -> `#import bevy_sprite::mesh2d_view_bindings`
- `#import bevy_sprite::mesh2d_struct` -> `#import bevy_sprite::mesh2d_types`
- NOTE: If you are using the mesh2d bind group at bind group index 2, you can remove those binding statements in your shader and just use `#import bevy_sprite::mesh2d_bindings` which itself imports the mesh2d types needed for the bindings.
# Objective
Models can be produced that do not have vertex tangents but do have normal map textures. The tangents can be generated. There is a way that the vertex tangents can be generated to be exactly invertible to avoid introducing error when recreating the normals in the fragment shader.
## Solution
- After attempts to get https://github.com/gltf-rs/mikktspace to integrate simple glam changes and version bumps, and releases of that crate taking weeks / not being made (no offense intended to the authors/maintainers; Bevy just has its own timelines and needs it has to take care of), it was decided to fork that repository. The following steps were taken:
- mikktspace was forked to https://github.com/bevyengine/mikktspace in order to preserve the repository's history in case the original is ever taken down
- The README in that repo was edited to add a note stating from where the repository was forked and explaining why
- The repo was locked for changes as its only purpose is historical
- The repo was integrated into the bevy repo using `git subtree add --prefix crates/bevy_mikktspace git@github.com:bevyengine/mikktspace.git master`
- In `bevy_mikktspace`:
- The travis configuration was removed
- `cargo fmt` was run
- The `Cargo.toml` was conformed to bevy's (just adding bevy to the keywords, changing the homepage and repository, changing the version to 0.7.0-dev - importantly the license is exactly the same)
- Remove the features, remove `nalgebra` entirely, only use `glam`, suppress clippy.
- This was necessary because our CI runs clippy with `--all-features` and the `nalgebra` and `glam` features are mutually exclusive, plus I don't want to modify this highly numerically-sensitive code just to appease clippy and diverge even more from upstream.
- Rebase https://github.com/bevyengine/bevy/pull/1795
- @jakobhellermann said it was fine to copy and paste, but it ended up being almost exactly the same with just a couple of adjustments when validating correctness, so I decided to actually rebase it and then build on top of it.
- Use the exact same fragment shader code to ensure correct normal mapping.
- Tested with both https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/NormalTangentMirrorTest which has vertex tangents and https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/NormalTangentTest which requires vertex tangent generation
Co-authored-by: alteous <alteous@outlook.com>
# Objective
allow meshes with equal z-depth to be rendered in a chosen order / avoid z-fighting
## Solution
add a `depth_bias` to `SpecializedMaterial` that is added to the mesh depth used for render-ordering.
# Objective
- Add an `ExtractResourcePlugin` for convenience and consistency
## Solution
- Add an `ExtractResourcePlugin` similar to `ExtractComponentPlugin` but for ECS `Resource`s. The system that is executed simply clones the main world resource into a render world resource, if and only if the main world resource was either added or changed since the last execution of the system.
- Add an `ExtractResource` trait with a `fn extract_resource(res: &Self) -> Self` function. This is used by the `ExtractResourcePlugin` to extract the resource
- Add a derive macro for `ExtractResource` on a `Resource` with the `Clone` trait, that simply returns `res.clone()`
- Use `ExtractResourcePlugin` wherever both possible and appropriate
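A minimal usage sketch, assuming the derive and plugin live under `bevy::render::extract_resource` and that `MyRenderSettings` is a hypothetical resource:
```rust
use bevy::prelude::*;
use bevy::render::extract_resource::{ExtractResource, ExtractResourcePlugin};

// Hypothetical main-world resource; the derive simply clones it into the render world.
#[derive(Clone, ExtractResource)]
struct MyRenderSettings {
    intensity: f32,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .insert_resource(MyRenderSettings { intensity: 1.0 })
        .add_plugin(ExtractResourcePlugin::<MyRenderSettings>::default())
        .run();
}
```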
# Objective
- Fixes #4456
## Solution
- Removed the `near` and `far` fields from the camera and the views.
---
## Changelog
- Removed the `near` and `far` fields from the camera and the views.
- Removed the `ClusterFarZMode::CameraFarPlane` far z mode.
## Migration Guide
- Cameras no longer accept near and far values during initialization
- `ClusterFarZMode::Constant` should be used with the far value instead of `ClusterFarZMode::CameraFarPlane`
# Objective
- noticed a few Vec3 and Vec2 that could be const
## Solution
- Declared them as const
- It seems to make a tiny improvement in the `many_lights` example, but given that the change is not complex at all, it could still be worth it
# Objective
Add support for vertex colors
## Solution
This change is modeled after how vertex tangents are handled, so the shader is conditionally compiled with vertex color support if the mesh has the corresponding attribute set.
Vertex colors are multiplied by the base color. I'm not sure if this is the best for all cases, but may be useful for modifying vertex colors without creating a new mesh.
I chose `VertexFormat::Float32x4`, but I'd prefer 16-bit floats if/when support is added.
## Changelog
### Added
- Vertex colors can be specified using the `Mesh::ATTRIBUTE_COLOR` mesh attribute.
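As a rough sketch of how the attribute is set (assuming `Mesh::insert_attribute` and the mesh/render-resource types are in scope):
```rust
// A single triangle with one RGBA color per vertex (matching VertexFormat::Float32x4).
let mut mesh = Mesh::new(PrimitiveTopology::TriangleList);
mesh.insert_attribute(
    Mesh::ATTRIBUTE_POSITION,
    vec![[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
);
mesh.insert_attribute(
    Mesh::ATTRIBUTE_COLOR,
    vec![[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 1.0], [0.0, 0.0, 1.0, 1.0]],
);
```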
# Objective
- When spawning a sprite the alpha is used for transparency, but when using the `Color::into()` implementation to spawn a `StandardMaterial`, the alpha is ignored.
- Pretty much every time I want to make something transparent I started with a `Color::rgb().into()` and I'm always surprised that it doesn't work when changing it to `Color::rgba().into()`
- It's possible there's an issue with this approach I am not thinking of, but I'm not sure what's the point of setting an alpha value without the goal of making a color transparent.
## Solution
- Set the `alpha_mode` to `AlphaMode::Blend` when the alpha is not the default value.
---
## Migration Guide
This is not a breaking change, but it can easily be migrated to reduce boilerplate
```rust
commands.spawn_bundle(PbrBundle {
mesh: meshes.add(shape::Cube::default().into()),
material: materials.add(StandardMaterial {
base_color: Color::rgba(1.0, 0.0, 0.0, 0.75),
alpha_mode: AlphaMode::Blend,
..default()
}),
..default()
});
// becomes
commands.spawn_bundle(PbrBundle {
mesh: meshes.add(shape::Cube::default().into()),
material: materials.add(Color::rgba(1.0, 0.0, 0.0, 0.75).into()),
..default()
});
```
Co-authored-by: Charles <IceSentry@users.noreply.github.com>
### Problem
It currently isn't possible to construct the default value of a reflected type. Because of that, it isn't possible to use `add_component` of `ReflectComponent` to add a new component to an entity because you can't know what the initial value should be.
### Solution
1. add `ReflectDefault` type
```rust
#[derive(Clone)]
pub struct ReflectDefault {
    default: fn() -> Box<dyn Reflect>,
}

impl ReflectDefault {
    pub fn default(&self) -> Box<dyn Reflect> {
        (self.default)()
    }
}

impl<T: Reflect + Default> FromType<T> for ReflectDefault {
    fn from_type() -> Self {
        ReflectDefault {
            default: || Box::new(T::default()),
        }
    }
}
```
2. add `#[reflect(Default)]` to all component types that implement `Default` and are user facing (so not `ComputedSize`, `CubemapVisibleEntities` etc.)
This makes it possible to add the default value of a component to an entity without any compile-time information:
```rust
fn main() {
    let mut app = App::new();
    app.register_type::<Camera>();

    let type_registry = app.world.get_resource::<TypeRegistry>().unwrap();
    let type_registry = type_registry.read();

    let camera_registration = type_registry.get(std::any::TypeId::of::<Camera>()).unwrap();
    let reflect_default = camera_registration.data::<ReflectDefault>().unwrap();
    let reflect_component = camera_registration
        .data::<ReflectComponent>()
        .unwrap()
        .clone();
    let default = reflect_default.default();
    drop(type_registry);

    let entity = app.world.spawn().id();
    reflect_component.add_component(&mut app.world, entity, &*default);

    let camera = app.world.entity(entity).get::<Camera>().unwrap();
    dbg!(&camera);
}
```
### Open questions
- should we have `ReflectDefault` or `ReflectFromWorld` or both?
# Objective
- While optimising many_cubes, I noticed that all material handles are extracted regardless of whether the entity to which the handle belongs is visible or not. As such >100k handles are extracted when only <20k are visible.
## Solution
- Only extract material handles of visible entities.
- This improves `many_cubes -- sphere` from ~42fps to ~48fps. It reduces not only the extraction time but also the system commands time. `Handle<StandardMaterial>` extraction and its system commands went from 0.522ms + 3.710ms respectively to 0.267ms + 0.227ms, an 88% reduction for this system in this case. It's very view dependent but...
# Objective
- Creating and executing render passes has GPU overhead. If there are no phase items in the render phase to draw, then this overhead should not be incurred as it has no benefit.
## Solution
- Check if there are no phase items to draw, and if not, do not construct or execute the render pass
---
## Changelog
- Changed: Do not create nor execute empty render passes
# Objective
- Meshes are queued in opaque phase instead of transparent phase when drawing wireframes.
- There is a name mismatch.
## Solution
- Rename `transparent_phase` to `opaque_phase` in `wireframe.rs`.
# Objective
Reduce the catch-all grab-bag of functionality in bevy_core by moving FloatOrd to bevy_utils.
A step in addressing #2931 and splitting bevy_core into more specific locations.
## Solution
Move FloatOrd into bevy_utils. Fix the compile errors.
As a result, bevy_core_pipeline, bevy_pbr, bevy_sprite, bevy_text, and bevy_ui no longer depend on bevy_core (they were only using it for `FloatOrd` previously).
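As a small illustration of the new location (a sketch, assuming the usual `bevy::utils` re-export):
```rust
// FloatOrd now lives in bevy_utils; it wraps an f32 and provides a total ordering.
use bevy::utils::FloatOrd;

fn sort_depths(mut depths: Vec<FloatOrd>) -> Vec<FloatOrd> {
    depths.sort();
    depths
}
```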
# Objective
- Related #4276.
- Part of the splitting process of #3503.
## Solution
- Move `Size` to `bevy_ui`.
## Reasons
- `Size` is only needed in `bevy_ui` (because it needs to use `Val` instead of `f32`), but it's also used as a worse `Vec2` replacement in other areas.
- `Vec2` is more powerful than `Size` so it should be used whenever possible.
- Discussion in #3503.
## Changelog
### Changed
- The `Size` type got moved from `bevy_math` to `bevy_ui`.
## Migration Guide
- The `Size` type got moved from `bevy::math` to `bevy::ui`. To migrate you just have to import `bevy::ui::Size` instead of `bevy::math::Size`, or use the `bevy::prelude` instead.
Co-authored-by: KDecay <KDecayMusic@protonmail.com>
# Objective
- Fix `ClusterConfig::None`
- This fix is from @robtfm but they didn't have time to submit it, so I am.
## Solution
- Always clear clusters and skip processing when `ClusterConfig::None`
- Conditionally remove `VisiblePointLights` from the view if it is present
# Objective
- https://github.com/bevyengine/bevy/pull/4098 still hasn't fixed minimisation on Windows.
- `Clusters.lights` is assumed to have the number of items given by the product of `Clusters.dimensions`'s axes.
## Solution
- Make that true in `clear`.
# Objective
- Fixes #4234
- Fixes #4473
- Built on top of #3989
- Improve performance of `assign_lights_to_clusters`
## Solution
- Remove the OBB-based cluster light assignment algorithm and calculation of view space AABBs
- Implement the 'iterative sphere refinement' algorithm used in Just Cause 3 by Emil Persson as documented in the Siggraph 2015 Practical Clustered Shading talk by Persson, on pages 42-44 http://newq.net/dl/pub/s2015_practical.pdf
- Adapt to also support orthographic projections
- Add `many_lights -- orthographic` for testing many lights using an orthographic projection
## Results
- `assign_lights_to_clusters` in `many_lights` before this PR on an M1 Max over 1500 frames had a median execution time of 1.71ms. With this PR it is 1.51ms, a reduction of 0.2ms or 11.7% for this system.
---
## Changelog
- Changed: Improved cluster light assignment performance
Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- While animating 501 instances of https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/BrainStem, I noticed things were getting a little slow
- Looking in tracy, the system `extract_skinned_meshes` is taking a lot of time, with a mean duration of 15.17ms
## Solution
- ~~Use `Vec` instead of a `SmallVec`~~
- ~~Don't use an temporary variable~~
- Compute the affine matrix as an `Affine3A` instead
- Remove the `temp` vec
|change|mean|
|---|---|
|base|15.17ms|
|~~vec~~|~~9.31ms~~|
|~~no temp variable~~|~~11.31ms~~|
|removing the temp vector|8.43ms|
|affine|13.21ms|
|all together|7.23ms|
# Objective
- Make use of storage buffers, where they are available, for clustered forward bindings to support far more point lights in a scene
- Fixes#3605
- Based on top of #4079
This branch on an M1 Max can keep 60fps with about 2150 point lights of radius 1m in the Sponza scene where I've been testing. The bottleneck is mostly assigning lights to clusters which grows faster than linearly (I think 1000 lights was about 1.5ms and 5000 was 7.5ms). I have seen papers and presentations leveraging compute shaders that can get this up to over 1 million. That said, I think any further optimisations should probably be done in a separate PR.
## Solution
- Add `RenderDevice` to the `Material` and `SpecializedMaterial` trait `::key()` functions to allow setting flags on the keys depending on feature/limit availability
- Make `GpuPointLights` and `ViewClusterBuffers` into enums containing `UniformVec` and `StorageBuffer` variants. Implement the necessary API on them to make usage the same for both cases, and the only difference is at initialisation time.
- Appropriate shader defs in the shader code to handle the two cases
## Context on some decisions / open questions
- I'm using `max_storage_buffers_per_shader_stage >= 3` as a check to see if storage buffers are supported (see the sketch after this list). I was thinking about diving into 'binding resource management' but it feels like we don't have enough use cases to understand the problem yet, and it is mostly a separate concern to this PR, so I think it should be handled separately.
- Should `ViewClusterBuffers` and `ViewClusterBindings` be merged, duplicating the count variables into the enum variants?
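A rough sketch (not the PR's exact code) of the kind of capability check described above, using the `RenderDevice` limits:
```rust
// Pick storage buffers when the device allows at least the three bindings needed here,
// otherwise fall back to uniform buffers (e.g. for WebGL2).
fn cluster_buffer_binding_type(render_device: &RenderDevice) -> BufferBindingType {
    if render_device.limits().max_storage_buffers_per_shader_stage >= 3 {
        BufferBindingType::Storage { read_only: true }
    } else {
        BufferBindingType::Uniform
    }
}
```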
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Animation with shadows crashes with:
```
thread 'main' panicked at 'wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `shadow_pipeline`
error matching VERTEX shader requirements against the pipeline
shader global ResourceBinding { group: 1, binding: 1 } is not available in the layout pipeline layout
visibility flags don't include the shader stage
```
Animation with wireframe crashes with:
```
thread 'main' panicked at 'wgpu error: Validation Error
Caused by:
In Device::create_render_pipeline
note: label = `opaque_mesh_pipeline`
error matching VERTEX shader requirements against the pipeline
shader global ResourceBinding { group: 2, binding: 0 } is not available in the layout pipeline layout
binding is missing from the pipeline layout
```
## Solution
- Fix the bindings
# Objective
Add a system parameter `ParamSet` to be used as container for conflicting parameters.
## Solution
Added two methods to the SystemParamState trait, which gives the access used by the parameter. Did the implementation. Added some convenience methods to FilteredAccessSet. Changed `get_conflicts` to return every conflicting component instead of breaking on the first conflicting `FilteredAccess`.
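A hedged usage sketch (the `Player` and `Enemy` marker components are hypothetical):
```rust
// Without ParamSet these two queries would conflict because both access &mut Transform.
fn separate_groups(
    mut set: ParamSet<(
        Query<&mut Transform, With<Player>>,
        Query<&mut Transform, With<Enemy>>,
    )>,
) {
    for mut transform in set.p0().iter_mut() {
        transform.translation.y += 1.0;
    }
    for mut transform in set.p1().iter_mut() {
        transform.translation.y -= 1.0;
    }
}
```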
Co-authored-by: bilsen <40690317+bilsen@users.noreply.github.com>
# Objective
Load skeletal weights and indices from GLTF files. Animate meshes.
## Solution
- Load skeletal weights and indices from GLTF files.
- Added `SkinnedMesh` component and `SkinnedMeshInverseBindPose` asset
- Added `extract_skinned_meshes` to extract joint matrices.
- Added queue phase systems for enqueuing the buffer writes.
Some notes:
- This ports part of #2359 to the current main.
- This generates new `BufferVec`s and bind groups every frame. The expectation here is that the number of `Query::get` calls during extract is probably going to be the stronger bottleneck, with up to 256 calls per skinned mesh. Until that is optimized, caching buffers and bind groups is probably a non-concern.
- Unfortunately, due to the uniform size requirements, this means a 16KB buffer is allocated for every skinned mesh every frame. There's probably a few ways to get around this, but most of them require either compute shaders or storage buffers, which are both incompatible with WebGL2.
Co-authored-by: james7132 <contact@jamessliu.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
* Refactor assign_lights_to_clusters to always clear + update clusters, even if the screen size isn't available yet / is zero. This fixes #4167. We still avoid the "expensive" per-light work when the screen size isn't available yet. I also consolidated some logic to eliminate some redundancies.
* Removed _a ton_ of (potentially very large) per-frame reallocations
* Removed `Res<VisiblePointLights>` (a vec) in favor of `Res<GlobalVisiblePointLights>` (a hashmap). We were allocating a new hashmap every frame, then collecting it into a vec every frame, then in another system _re-generating the hashmap_. It is always used like a hashmap, so we might as well embrace that. We now reuse the same hashmap every frame and don't use any intermediate collections.
* We were re-allocating Clusters aabb and light vectors every frame by re-constructing Clusters every frame. We now re-use the existing collections.
* Reuse per-camera VisiblePointLight vecs when possible instead of allocating them every frame. We now only insert VisiblePointLights if the component doesn't exist yet.
# Objective
- Fixes#3970
- To support Bevy's shader abstraction (shader defs, shader imports and hot shader reloading) for compute shaders, I have followed Cart's advice and changed the `PipelineCache` to accommodate both compute and render pipelines.
## Solution
- renamed `RenderPipelineCache` to `PipelineCache`
- Cached Pipelines are now represented by an enum (render, compute)
- split the `SpecializedPipelines` into `SpecializedRenderPipelines` and `SpecializedComputePipelines`
- updated the game of life example
## Open Questions
- should `SpecializedRenderPipelines` and `SpecializedComputePipelines` be merged and how would we do that?
- should the `get_render_pipeline` and `get_compute_pipeline` methods be merged?
- is pipeline specialization for different entry points a good pattern
Co-authored-by: Kurt Kühnert <51823519+Ku95@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Reduce time spent in the `check_visibility` system
## Solution
- Use `Vec3A` for all bounding volume types to leverage SIMD optimisations and to avoid repeated runtime conversions from `Vec3` to `Vec3A`
- Inline all bounding volume intersection methods
- Add on-the-fly calculated `Aabb` -> `Sphere` and do `Sphere`-`Frustum` intersection tests before `Aabb`-`Frustum` tests. This is faster for `many_cubes` but could be slower in other cases where the sphere test gives a false-positive that the `Aabb` test discards. Also, I tested precalculating the `Sphere`s and inserting them alongside the `Aabb` but this was slower.
- Do not test meshes against the far plane. Apparently games don't do this anymore with infinite projections, and it's one fewer plane to test against. I made it optional and still do the test for culling lights but that is up for discussion.
- These collectively reduce `check_visibility` execution time in `many_cubes -- sphere` from 2.76ms to 1.48ms and increase frame rate from ~42fps to ~44fps
# Objective
- Support compressed textures including 'universal' formats (ETC1S, UASTC) and transcoding of them to formats supported by the target GPU
- Support `.dds`, `.ktx2`, and `.basis` files
## Solution
- Fixes https://github.com/bevyengine/bevy/issues/3608. Look there for more details.
- Note that the functionality is all enabled through non-default features. If it is desirable to enable some by default, I can do that.
- The `basis-universal` crate, used for `.basis` file support and for transcoding, is built on bindings against a C++ library. It's not feasible to rewrite in Rust in a short amount of time. There are no Rust alternatives of which I am aware and it's specialised code. In its current state it doesn't support the wasm target, but I don't know for sure. However, it is possible to build the upstream C++ library with emscripten, so there is perhaps a way to add support for web too with some shenanigans.
- There's no support for transcoding from BasisLZ/ETC1S in KTX2 files as it was quite non-trivial to implement and didn't feel important given people could use `.basis` files for ETC1S.
# Objective
fix cluster tilesize and tilecount calculations.
Fixes https://github.com/bevyengine/bevy/issues/4127 & https://github.com/bevyengine/bevy/issues/3596
## Solution
- calculate tilesize as smallest integers such that dimensions.xy() tiles will cover the screen
- calculate final dimensions as smallest integers such that final dimensions * tilesize will cover the screen
there is more cleanup that could be done in these functions. a future PR will likely remove the tilesize completely, so this is just a minimal change set to fix the current bug at small screen sizes
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
provide some customisation for default cluster setup
avoid "cluster index lists is full" in all cases (using a strategy outlined by @superdump)
## Solution
Add ClusterConfig enum (which can be inserted into a view at any time) to allow specifying cluster setup with variants:
- None (do not do any light assignment - for views which do not require light info, e.g. minimaps etc)
- Single (one cluster)
- XYZ (explicit cluster counts in each dimension)
- FixedZ (most similar to current - specify Z-slices and total, then x and y counts are dynamically determined to give approximately square clusters based on current aspect ratio)
Defaults to FixedZ { total: 4096, z: 24 } which is similar to the current setup.
Per frame, estimate the number of indices that would be required for the current config and decrease the cluster counts / increase the cluster sizes in the x and y dimensions if the index list would be too small.
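For example (a sketch, using the pre-camera-rework bundle name), a view that never needs light assignment can opt out entirely:
```rust
// e.g. a minimap camera that does not need clustered lighting information
commands
    .spawn_bundle(PerspectiveCameraBundle::default())
    .insert(ClusterConfig::None);
```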
notes:
- I didn't put ClusterConfig in the camera bundles to avoid introducing a dependency from bevy_render to bevy_pbr. The `ClusterConfig` enum comes with a pbr-centric impl block, so I didn't want to move that into bevy_render either.
- ~Might want to add None variant to cluster config for views that don't care about lights?~
- Not well tested for orthographic
- ~there's a cluster_muck branch on my repo which includes some diagnostics / a modified lighting example which may be useful for tyre-kicking~ (outdated, I will bring it up to date if required)
anecdotal timings:
FPS on the lighting demo is negligibly better (~5%), maybe due to a small optimisation constraining the light aabb to be in front of the camera
FPS on the lighting demo with 100 extra lights added is ~33% faster, and also renders correctly as the cluster index count is no longer exceeded
## Objective
Currently, all directional and point lights have their viewing frusta recalculated every frame, even if they have not moved or been disabled/enabled.
## Solution
The relevant systems now make use of change detection to only update those lights whose viewing frusta may have changed.
This makes it possible for materials to configure front or
back face culling, or disable culling.
Initially I looked at specializing the Mesh which currently
controls this state but conceptually it seems more appropriate
to control this at the material level, not the mesh level.
_Just for reference this also seems to be consistent with Unity
where materials/shaders can configure the culling mode between
front/back/off - as opposed to configuring any culling state
when importing a mesh._
After some archaeology, trying to understand how this might
relate to the existing 'double_sided' option, it was determined
that double_sided is a more high level lighting option originally
from Filament that will cause the normals for back faces to be
flipped.
For sake of avoiding complexity, but keeping control this
currently keeps the options orthogonal, and adds some clarifying
documentation for `double_sided`. This won't affect any existing
apps since there hasn't been a way to disable backface culling
up until now, so the option was essentially redundant.
double_sided support could potentially be updated to imply
disabling of backface culling.
For reference https://github.com/bevyengine/bevy/pull/3734/commits also looks at exposing cull mode control. I think the main difference here is that this patch handles RenderPipelineDescriptor specialization directly within the StandardMaterial implementation instead of communicating info back to the Mesh via the `queue_material_meshes` system.
With the way material.rs builds up the final RenderPipelineDescriptor first by calling specialize for the MeshPipeline followed by specialize for the material then it seems like we have a natural place to override anything in the descriptor that's first configured for the mesh state.
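A hedged sketch of what the material-level control described here looks like on `StandardMaterial`:
```rust
// Disable backface culling for a material; Some(Face::Back) remains the default.
let material = StandardMaterial {
    base_color: Color::rgb(0.8, 0.7, 0.6),
    cull_mode: None,
    // double_sided stays an independent, lighting-oriented option, as documented above
    double_sided: true,
    ..Default::default()
};
```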
# Objective
Add Visibility for lights
## Solution
- add Visibility to PointLightBundle and DirectionalLightBundle
- filter lights used by Visibility.is_visible
note: includes changes from #3916 due to overlap, will be cleaner after that is merged
# Objective
fix #3915
## Solution
the issues are caused by
- lights are assigned to clusters before being filtered down to MAX_POINT_LIGHTS, leading to cluster counts potentially being too high
- after fixing the above, packing the count into 8 bits still causes overflow with exactly 256 lights affecting a cluster
to fix:
`assign_lights_to_clusters`:
- limit extracted lights to MAX_POINT_LIGHTS, selecting based on shadow-caster & intensity (if required)
- warn if the MAX_POINT_LIGHTS count is exceeded
`prepare_lights`:
- limit the lights assigned to a cluster to CLUSTER_COUNT_MASK (which is 1 less than MAX_POINT_LIGHTS) to avoid overflowing into the offset bits
notes:
- a better solution to the overflow may be to use more than 8 bits for cluster_count (the comment states only 14 of the remaining 24 bits are used for the offset). This would touch more of the code base, but I'm happy to try it if it has some benefit.
- intensity is only one way to select lights. It may be worth allowing user configuration of the light filtering, but I can't see a clean way to do that
# Objective
- Optimize assign_lights_to_clusters
## Solution
- Avoid inserting entities into hash sets in inner loops when it is known they will be inserted in at least one iteration of the loop.
- Use a Vec instead of a hash set where the set is not needed
- Avoid explicit calculation of the cluster_index from x,y,z coordinates, instead using row and column offsets and just adding z in the inner loop
- These changes cut the time spent in the system roughly in half
# Objective
- In the large majority of cases, users were calling `.unwrap()` immediately after `.get_resource`.
- Attempting to add more helpful error messages here resulted in endless manual boilerplate (see #3899 and the linked PRs).
## Solution
- Add an infallible variant named `.resource` and so on.
- Use these infallible variants over `.get_resource().unwrap()` across the code base.
## Notes
I did not provide equivalent methods on `WorldCell`, in favor of removing it entirely in #3939.
## Migration Guide
Infallible variants of `.get_resource` have been added that implicitly panic, rather than needing to be unwrapped.
Replace `world.get_resource::<Foo>().unwrap()` with `world.resource::<Foo>()`.
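A minimal sketch of the migration (at this point a plain struct can be used as a resource; `Score` below is a hypothetical example):
```rust
#[derive(Default)]
struct Score(u32);

let mut world = World::new();
world.init_resource::<Score>();

// Before: let score = world.get_resource::<Score>().unwrap();
let score = world.resource::<Score>();
assert_eq!(score.0, 0);
```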
## Impact
- `.unwrap` search results before: 1084
- `.unwrap` search results after: 942
- internal `unwrap_or_else` calls added: 4
- trivial unwrap calls removed from tests and code: 146
- uses of the new `try_get_resource` API: 11
- percentage of the time the unwrapping API was used internally: 93%
# Objective
Will fix #3377 and #3254
## Solution
Use an enum to represent either a `WindowId` or `Handle<Image>` in place of `Camera::window`.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This PR makes a number of changes to how meshes and vertex attributes are handled, with the goal of enabling easy and flexible custom vertex attributes:
* Reworks the `Mesh` type to use the newly added `VertexAttribute` internally
* `VertexAttribute` defines the name, a unique `VertexAttributeId`, and a `VertexFormat`
* `VertexAttributeId` is used to produce consistent sort orders for vertex buffer generation, replacing the more expensive and often surprising "name based sorting"
* Meshes can be used to generate a `MeshVertexBufferLayout`, which defines the layout of the gpu buffer produced by the mesh. `MeshVertexBufferLayouts` can then be used to generate actual `VertexBufferLayouts` according to the requirements of a specific pipeline. This decoupling of "mesh layout" vs "pipeline vertex buffer layout" is what enables custom attributes. We don't need to standardize _mesh layouts_ or contort meshes to meet the needs of a specific pipeline. As long as the mesh has what the pipeline needs, it will work transparently.
* Mesh-based pipelines now specialize on `&MeshVertexBufferLayout` via the new `SpecializedMeshPipeline` trait (which behaves like `SpecializedPipeline`, but adds `&MeshVertexBufferLayout`). The integrity of the pipeline cache is maintained because the `MeshVertexBufferLayout` is treated as part of the key (which is fully abstracted from implementers of the trait ... no need to add any additional info to the specialization key).
* Hashing `MeshVertexBufferLayout` is too expensive to do for every entity, every frame. To make this scalable, I added a generalized "pre-hashing" solution to `bevy_utils`: `Hashed<T>` keys and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). Why didn't I just do the quick and dirty in-place "pre-compute hash and use that u64 as a key in a hashmap" that we've done in the past? Because it's wrong! Hashes by themselves aren't enough because two different values can produce the same hash. Re-hashing a hash is even worse! I decided to build a generalized solution because this pattern has come up in the past and we've chosen to do the wrong thing. Now we can do the right thing! This did unfortunately require pulling in `hashbrown` and using that in `bevy_utils`, because avoiding re-hashes requires the `raw_entry_mut` API, which isn't stabilized yet (and may never be ... `entry_ref` is favored now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. Note that adding `hashbrown` doesn't increase our dependency count because it was already in our tree. I will probably break these changes out into their own PR.
* Specializing on `MeshVertexBufferLayout` has one non-obvious behavior: it can produce identical pipelines for two different MeshVertexBufferLayouts. To optimize the number of active pipelines / reduce re-binds while drawing, I de-duplicate pipelines post-specialization using the final `VertexBufferLayout` as the key. For example, consider a pipeline that needs the layout `(position, normal)` and is specialized using two meshes: `(position, normal, uv)` and `(position, normal, other_vec2)`. If both of these meshes result in `(position, normal)` specializations, we can use the same pipeline! Now we do. Cool!
To briefly illustrate, this is what the relevant section of `MeshPipeline`'s specialization code looks like now:
```rust
impl SpecializedMeshPipeline for MeshPipeline {
    type Key = MeshPipelineKey;

    fn specialize(
        &self,
        key: Self::Key,
        layout: &MeshVertexBufferLayout,
    ) -> RenderPipelineDescriptor {
        let mut vertex_attributes = vec![
            Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
            Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
            Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
        ];

        let mut shader_defs = Vec::new();
        if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
            shader_defs.push(String::from("VERTEX_TANGENTS"));
            vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
        }

        let vertex_buffer_layout = layout
            .get_layout(&vertex_attributes)
            .expect("Mesh is missing a vertex attribute");
```
Notice that this is _much_ simpler than it was before. And now any mesh with any layout can be used with this pipeline, provided it has vertex positions, normals, and uvs. We even got to remove `HAS_TANGENTS` from MeshPipelineKey and `has_tangents` from `GpuMesh`, because that information is redundant with `MeshVertexBufferLayout`.
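As a hedged sketch of the user-facing side this enables (the attribute name, id, and shader location below are made-up values for illustration):
```rust
// A custom vertex attribute; the u64 id just needs to be unique across attributes.
pub const ATTRIBUTE_BLEND_COLOR: MeshVertexAttribute =
    MeshVertexAttribute::new("BlendColor", 988540917, VertexFormat::Float32x4);

fn build_mesh() -> Mesh {
    let mut mesh = Mesh::new(PrimitiveTopology::TriangleList);
    mesh.insert_attribute(
        ATTRIBUTE_BLEND_COLOR,
        vec![[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 1.0], [0.0, 0.0, 1.0, 1.0]],
    );
    // A pipeline that wants this attribute asks the layout for it during specialization:
    // layout.get_layout(&[ATTRIBUTE_BLEND_COLOR.at_shader_location(10)])
    mesh
}
```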
This is still a draft because I still need to:
* Add more docs
* Experiment with adding error handling to mesh pipeline specialization (which would print errors at runtime when a mesh is missing a vertex attribute required by a pipeline). If it doesn't tank perf, we'll keep it.
* Consider breaking out the PreHash / hashbrown changes into a separate PR.
* Add an example illustrating this change
* Verify that the "mesh-specialized pipeline de-duplication code" works properly
Please don't yell at me for not doing these things yet :) Just trying to get this in people's hands asap.
Alternative to #3120
Fixes #3030
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Adds "hot reloading" of internal assets, which is normally not possible because they are loaded using `include_str` / direct Asset collection access.
This is accomplished via the following:
* Add a new `debug_asset_server` feature flag
* When that feature flag is enabled, create a second App with a second AssetServer that points to a configured location (by default the `crates` folder). Plugins that want to add hot reloading support for their assets can call the new `app.add_debug_asset::<T>()` and `app.init_debug_asset_loader::<T>()` functions.
* Load "internal" assets using the new `load_internal_asset` macro. By default this is identical to the current "include_str + register in asset collection" approach. But if the `debug_asset_server` feature flag is enabled, it will also load the asset dynamically in the debug asset server using the file path. It will then set up a correlation between the "debug asset" and the "actual asset" by listening for asset change events.
This is an alternative to #3673. The goal was to keep the boilerplate and features flags to a minimum for bevy plugin authors, and allow them to home their shaders near relevant code.
This is a draft because I haven't done _any_ quality control on this yet. I'll probably rename things and remove a bunch of unwraps. I just got it working and wanted to use it to start a conversation.
Fixes #3660
This enables shaders to (optionally) define their import path inside their source. This has a number of benefits:
1. enables users to define their own custom paths directly in their assets
2. moves the import path "close" to the asset instead of centralized in the plugin definition, which seems "better" to me.
3. makes "internal hot shader reloading" way more reasonable (see #3966)
4. logically opens the door to importing "parts" of a shader by defining "import_path blocks".
```wgsl
#define_import_path bevy_pbr::mesh_struct

struct Mesh {
    model: mat4x4<f32>;
    inverse_transpose_model: mat4x4<f32>;
    // 'flags' is a bit field indicating various options. u32 is 32 bits so we have up to 32 options.
    flags: u32;
};

let MESH_FLAGS_SHADOW_RECEIVER_BIT: u32 = 1u;
```
# Objective
- `WgpuOptions` is mutated to be updated with the actual device limits and features, but this information is readily available to both the main and render worlds through the `RenderDevice`, which has `.limits()` and `.features()` methods
- Information about the adapter in terms of its name, the backend in use, etc were not being exposed but have clear use cases for being used to take decisions about what rendering code to use. For example, if something works well on AMD GPUs but poorly on Intel GPUs. Or perhaps something works well in Vulkan but poorly in DX12.
## Solution
- Stop mutating `WgpuOptions` and don't insert the updated values into the main and render worlds
- Return `AdapterInfo` from `initialize_renderer` and insert it into the main and render worlds (see the sketch after this list)
- Use `RenderDevice` limits in the lighting code that was using `WgpuOptions.limits`.
- Renamed `WgpuOptions` to `WgpuSettings`
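A hedged sketch (assuming the inserted resource is wgpu's `AdapterInfo` type, as described above) of reading it from a system:
```rust
// Log which GPU and backend were selected at startup.
fn log_adapter_info(adapter_info: Res<AdapterInfo>) {
    info!("adapter: {} ({:?} backend)", adapter_info.name, adapter_info.backend);
}
```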
What it says on the tin.
This has got more to do with making `clippy` slightly more *quiet* than it does with changing anything that might greatly impact readability or performance.
that said, deriving `Default` for a couple of structs is a nice easy win
(cherry picked from commit de943381bd2a8b242c94db99e6c7bbd70006d7c3)
# Objective
The view uniform lacks view transform information. The inverse transform is currently provided but this is not sufficient if you do not have access to an `inverse` function (such as in WGSL).
## Solution
Grab the view transform, put it in the view uniform, use the same matrix to compute the inverse as well.
# Objective
The query for `VisiblePointLights` in `check_light_mesh_visibility` has a `Without<DirectionalLight>` filter. However, because `VisiblePointLights` is no longer an alias for `VisibleEntities`, the query won't conflict with the query for `DirectionalLight`s and thus the filter is unnecessary.
## Solution
Remove the filter and the outdated comment explaining its purpose.
# Objective
- Do not panic when more than 256 point lights are added to the scene
- Fixes https://github.com/bevyengine/bevy/issues/3682
## Solution
- Only iterate the first `MAX_POINT_LIGHTS` lights instead of as many as there are
## Open questions
- Should we warn that there are more than the maximum allowed number of point lights in the scene?
# Objective
In this PR I added the ability to opt out of graphical backends. Closes #3155.
## Solution
I turned backends into `Option` ~~and removed panicking sub app API to force users to handle the error (was suggested by `@cart`)~~.
# Objective
The current 2d rendering is specialized to render sprites, we need a generic way to render 2d items, using meshes and materials like we have for 3d.
## Solution
I cloned a good part of `bevy_pbr` into `bevy_sprite/src/mesh2d`, removed lighting and pbr itself, adapted it to 2d rendering, added a `ColorMaterial`, and modified the sprite rendering to break batches around 2d meshes.
~~The PR is a bit crude; I tried to change as little as I could in both the parts copied from 3d and the current sprite rendering to make reviewing easier. In the future, I expect we could make the sprite rendering a normal 2d material, cleanly integrated with the rest.~~ _edit: see <https://github.com/bevyengine/bevy/pull/3460#issuecomment-1003605194>_
## Remaining work
- ~~don't require mesh normals~~ _out of scope_
- ~~add an example~~ _done_
- support 2d meshes & materials in the UI?
- bikeshed names (I didn't think hard about naming, please check if it's fine)
## Remaining questions
- ~~should we add a depth buffer to 2d now that there are 2d meshes?~~ _let's revisit that when we have an opaque render phase_
- ~~should we add MSAA support to the sprites, or remove it from the 2d meshes?~~ _I added MSAA to sprites since it's really needed for 2d meshes_
- ~~how to customize vertex attributes?~~ _#3120_
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Using plain exponential depth slicing for perspective projection cameras results in unnecessarily many slices very close together close to the camera. If the camera is then moved close to a collection of point lights, they will likely exhaust the available uniform buffer space for the lists of which lights affect which clusters.
## Solution
- A simple solution to this is to use a different near plane value for the depth slicing and set it to where the first slice's far plane should be. The default value is 5 and works well. This results in the configured number of depth slices, maintains the exponential slicing beyond the initial slice, and no slices are too small such that they cause problems that are sensitive to the view position.
# Objective
- Allow the user to specify the priority when configuring wgpu features/limits and by default use the maximum capabilities of the chosen adapter.
## Solution
- Add a `WgpuOptionsPriority` enum with `Compatibility`, `Functionality` and `WebGL2` options.
- Add a `priority: WgpuOptionsPriority` member to `WgpuOptions`.
- When initialising the renderer, if `WgpuOptions::priority == WgpuOptionsPriority::Functionality`, query the adapter for the available features and limits, use them when creating a device, and update `WgpuOptions` with those values. If `Compatibility`, use the behaviour as before this PR. If `WebGL2`, then use the WebGL2 downlevel limits as used when building for wasm, for convenience of testing WebGL2 limits without having to build for wasm.
- Add an environment variable `WGPU_OPTIONS_PRIO` that takes `compatibility`, `functionality`, `webgl2`.
- Default to `WgpuOptionsPriority::Functionality`.
- Insert updated `WgpuOptions` into render app world as well. This is useful for applying the limits when rendering, such as limiting the directional light shadow map texture to 2048x2048 when using WebGL2 downlevel limits but not on wasm.
- Reduced `draw_state` logs from `debug` to `trace` and added `debug` level logs for the wgpu features and limits. Use `RUST_LOG=bevy_render=debug` to see the output.
# Objective
- Add support for loading lights from glTF 2.0 files
## Solution
- This adds support for the KHR_lights_punctual extension which supports point, directional, and spot lights, though we don't yet support spot lights.
- Inserting light bundles when creating scenes required registering some more light bundle component types.
# Objective
- After updating #2988, all the examples started crashing with
```
thread 'main' panicked at 'Plugin "bevy_render::render_component::ExtractComponentPlugin<bevy_asset::handle::Handle<bevy_pbr::pbr_material::StandardMaterial>>" was already added', crates/bevy_app/src/app.rs:831:33
```
## Solution
Plugin was added twice:
directly:
1d0d8a3397/crates/bevy_pbr/src/lib.rs (L73)
and through `MaterialPlugin`:
1d0d8a3397/crates/bevy_pbr/src/lib.rs (L72)
1d0d8a3397/crates/bevy_pbr/src/material.rs (L183)
I removed the extra plugin
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
#3457 adds the `doc_markdown` clippy lint, which checks doc comments to make sure code identifiers are escaped with backticks. This causes a lot of lint errors, so this is one of a number of PR's that will fix those lint errors one crate at a time.
This PR fixes lints in the `bevy_pbr` crate.
# Objective
- While reading code, found some queries that are `mut` and not used as such
## Solution
- Remove `mut` when possible
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
This adds "high level" `Material` and `SpecializedMaterial` traits, which can be used with a `MaterialPlugin<T: SpecializedMaterial>`. `MaterialPlugin` automatically registers the appropriate resources, draw functions, and queue systems. The `Material` trait is simpler, and should cover most use cases. `SpecializedMaterial` is like `Material`, but it also requires defining a "specialization key" (see #3031). `Material` has a trivial blanket impl of `SpecializedMaterial`, which allows us to use the same types + functions for both.
This makes defining custom 3d materials much simpler (see the `shader_material` example diff) and ensures consistent behavior across all 3d materials (both built in and custom). I ported the built in `StandardMaterial` to `MaterialPlugin`. There is also a new `MaterialMeshBundle<T: SpecializedMaterial>`, which `PbrBundle` aliases to.
# Objective
- Our crevice is still called "crevice", which we can't use for a release
- Users would need to use our "crevice" directly to be able to use the derive macro
## Solution
- Rename crevice to bevy_crevice, and crevice-derive to bevy-crevice-derive
- Re-export it from bevy_render, and use it from bevy_render everywhere
- Fix derive macro to work either from bevy_render, from bevy_crevice, or from bevy
## Remaining
- It is currently re-exported as `bevy::render::bevy_crevice`, is it the path we want?
- After a brief suggestion to Cart, I changed the version to follow Bevy version instead of crevice, do we want that?
- Crevice README.md need to be updated
- in the `Cargo.toml`, there are a few things to change. How do we want to change them? How do we keep attributions to original Crevice?
```
authors = ["Lucien Greathouse <me@lpghatguy.com>"]
documentation = "https://docs.rs/crevice"
homepage = "https://github.com/LPGhatguy/crevice"
repository = "https://github.com/LPGhatguy/crevice"
```
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Instead of panicking when the `indices` field of a mesh is `None`, actually manage it.
This is just a question of keeping track of the vertex buffer size.
## Notes
* Relying on this change to improve performance on [bevy_debug_lines using the new renderer](https://github.com/Toqozz/bevy_debug_lines/pull/10)
* I'm still new to rendering; my only experience with wgpu is the learn-wgpu tutorial, so I'm likely overlooking something.
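For illustration, a hedged sketch (assuming a Bevy 0.7/0.8-style `Mesh` API) of the kind of mesh this change supports: the `indices` field is simply left as `None`, so the mesh is drawn non-indexed and the draw call is sized by the vertex buffer:
```rust
use bevy::prelude::*;
use bevy::render::render_resource::PrimitiveTopology;

// Build a two-vertex line mesh without ever setting indices.
fn line_mesh() -> Mesh {
    let mut mesh = Mesh::new(PrimitiveTopology::LineList);
    let positions: Vec<[f32; 3]> = vec![[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]];
    mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, positions);
    // No `set_indices` call: `indices` stays `None`, so the vertex count
    // comes from the position attribute instead of an index buffer.
    mesh
}

fn main() {
    let mesh = line_mesh();
    assert!(mesh.indices().is_none());
}
```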
# Objective
- 3d examples fail to run in WebGL2 because of unsupported texture formats or textures that are too large
## Solution
- switch to supported formats when a feature is enabled. I chose a feature instead of a build target so it doesn't conflict with potential WebGPU support
Very inspired by 6813b2edc5, and needs #3290 to work.
I named the feature `webgl2`, but it's only needed if one wants to use PBR in WebGL2. Examples using only 2D already work.
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
Fixes: #3368
The issue was caused by the screen size being `(0, 0)`.
## Solution
Don't update clusters if the screen size is zero. A better solution might be to not render when minimized, but this works in the meantime.
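A hypothetical, simplified stand-in for the fix (not Bevy's actual internal function), just to show the shape of the guard:
```rust
fn main() {
    // While minimized, the window reports a physical size of (0, 0).
    maybe_update_clusters((0, 0));
    maybe_update_clusters((1280, 720));
}

// Skip cluster updates entirely when either screen dimension is zero.
fn maybe_update_clusters(screen_size: (u32, u32)) {
    if screen_size.0 == 0 || screen_size.1 == 0 {
        return;
    }
    // ...recompute cluster dimensions from the non-zero screen size here.
    println!("updating clusters for {screen_size:?}");
}
```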
Co-authored-by: John <startoaster23@gmail.com>
# Objective
Fixes#3379
## Solution
The custom mesh pipelines needed to be specialized on each mesh's primitive topology, as done in `queue_meshes()`
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Fixes #3352, fixes #3208
## Solution
- Update wgpu to 0.12
- Update naga to 0.8
- Resolve compilation errors
- Remove `[[block]]` from WGSL shaders (because it is deprecated and wgpu can no longer parse it)
- Replace `elseif` with `else if` in pbr.wgsl
# Objective
This PR fixes a crash when winit is enabled and there is a camera in the world. Part of #3155
## Solution
In this PR, I removed two unwraps and added an example for regression testing.
# Objective
PBR lighting was broken in the new renderer when using orthographic projections due to the way the depth slicing works for the clusters. Fix it.
## Solution
- The default orthographic projection near plane is 0.0. The perspective projection depth slicing does a division by the near plane which gives a floating point NaN and the clustering all breaks down.
- Orthographic projections have a linear depth mapping, so it made intuitive sense to me to do depth slicing with a linear mapping too. The alternative I saw was to try to handle the near plane being at 0.0 and using the exponential depth slicing, but that felt like a hack that didn't make sense.
- As such, I have added code that detects whether the projection is orthographic based on `projection[3][3] == 1.0` and then implemented the orthographic mapping case throughout (when computing cluster AABBs, and when mapping a view-space position (or light) to a cluster id in both the Rust and shader code). See the illustrative sketch below.
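Below is an illustrative, self-contained sketch of the idea, not the actual Bevy implementation; the function, parameter names, and slice formulas are simplified for clarity:
```rust
// Map a view-space depth to a cluster slice index, choosing a linear
// mapping for orthographic projections and an exponential one otherwise.
fn view_z_to_slice(projection_w_w: f32, depth: f32, z_near: f32, z_far: f32, z_slices: u32) -> u32 {
    // For an orthographic projection matrix, projection[3][3] == 1.0;
    // for a perspective projection it is 0.0.
    let is_orthographic = projection_w_w == 1.0;
    let t = if is_orthographic {
        // Linear: slices evenly spaced in view-space depth, well defined
        // even when the near plane is at 0.0.
        (depth - z_near) / (z_far - z_near)
    } else {
        // Exponential: slices evenly spaced in log depth; dividing by a
        // near plane of 0.0 here is what produced the NaNs described above.
        (depth / z_near).ln() / (z_far / z_near).ln()
    };
    ((t * z_slices as f32).floor() as u32).min(z_slices - 1)
}

fn main() {
    let ortho = view_z_to_slice(1.0, 5.0, 0.0, 100.0, 24);
    let persp = view_z_to_slice(0.0, 5.0, 0.1, 100.0, 24);
    println!("orthographic slice: {ortho}, perspective slice: {persp}");
}
```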
## Screenshots
Before:
![before](https://user-images.githubusercontent.com/302146/145847278-5b1bca74-fbad-4cc5-8b49-384f6a377fdc.png)
After:
<img width="1392" alt="Screenshot 2021-12-13 at 16 36 53" src="https://user-images.githubusercontent.com/302146/145847314-6f3a2035-5d87-4896-8032-0c3e35e15b7d.png">
Old renderer (slightly lighter due to slight difference in configured intensity):
<img width="1392" alt="Screenshot 2021-12-13 at 16 42 23" src="https://user-images.githubusercontent.com/302146/145847391-6a5e6fe0-22da-4fc1-a6c7-440543689a63.png">
This makes the [New Bevy Renderer](#2535) the default (and only) renderer. The new renderer isn't _quite_ ready for the final release yet, but I want as many people as possible to start testing it so we can identify bugs and address feedback prior to release.
The examples are all ported over and operational with a few exceptions:
* I removed a good portion of the examples in the `shader` folder. We still have some work to do in order to make these examples possible / ergonomic / worthwhile: #3120 and "high level shader material plugins" are the big ones. This is a temporary measure.
* Temporarily removed the multiple_windows example: doing this properly in the new renderer will require the upcoming "render targets" changes. Same goes for the render_to_texture example.
* Removed z_sort_debug: entity visibility sort info is no longer available in app logic. We could do this on the "render app" side, but I don't consider it a priority.
# Objective
During work on #3009 I've found that not all jobs use actions-rs, and therefore a previous version of Rust is used for them. So while compilation and other checks can pass, checking markup and the Android build may fail with compilation errors.
## Solution
This PR adds `actions-rs` for any job running cargo, and updates the edition to 2021.
This implements the most minimal variant of #1843 - a derive for marker trait. This is a prerequisite to more complicated features like statically defined storage type or opt-out component reflection.
In order to make a component struct's purpose explicit and avoid misuse, it must be annotated with `#[derive(Component)]` (manual impl is discouraged for compatibility). Right now this is just a marker trait, but in the future it might be expanded. Making this change early allows us to make further changes later without breaking backward compatibility for derive macro users.
This already prevents a lot of issues, like using bundles in `insert` calls. Primitive types are no longer valid components either. This can be easily worked around by adding newtype wrappers and deriving `Component` for them.
One funny example of prevented bad code (from our own tests) is when a newtype struct or enum variant is used. Previously, it was possible to write `insert(Newtype)` instead of `insert(Newtype(value))`. That code compiled, because function pointers (in this case the newtype struct constructor) implement `Send + Sync + 'static`, so we allowed them to be used as components. This is no longer the case and such invalid code will trigger a compile error.
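A minimal sketch of the new requirement; `Health` and `Score` are hypothetical example components:
```rust
use bevy::prelude::*;

// Components must now opt in explicitly via the derive.
#[derive(Component)]
struct Health(f32);

// Primitive types like `u32` are no longer components on their own;
// a newtype wrapper with `#[derive(Component)]` works instead.
#[derive(Component)]
struct Score(u32);

fn spawn_player(mut commands: Commands) {
    commands.spawn().insert(Health(100.0)).insert(Score(0));
}

fn main() {
    App::new().add_startup_system(spawn_player).run();
}
```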
Co-authored-by: TheRawMeatball <therawmeatball@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Remove all the `.system()` possible.
- Check for remaining missing cases.
## Solution
- Remove all `.system()`, fix compile errors
- 32 calls to `.system()` remain, mostly internal; the few others should be removed after #2446 (see the before/after sketch below)
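A minimal before/after sketch of the change (the `greet` system is a made-up example):
```rust
use bevy::prelude::*;

fn greet() {
    println!("hello");
}

fn main() {
    App::new()
        // Previously this had to be written as `.add_system(greet.system())`;
        // the `.system()` call is no longer needed.
        .add_system(greet)
        .run();
}
```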
This is extracted out of eb8f973646476b4a4926ba644a77e2b3a5772159 and includes some additional changes to remove all references to AppBuilder and fix examples that still used App::build() instead of App::new(). In addition I didn't extract the sub app feature as it isn't ready yet.
You can use `git diff --diff-filter=M eb8f973646476b4a4926ba644a77e2b3a5772159` to find all differences in this PR. The `--diff-filter=M` flag filters away all files that were added in the original commit but not in this one.
Co-Authored-By: Carter Anderson <mcanders1@gmail.com>
This relicenses Bevy under the dual MIT or Apache-2.0 license. For rationale, see #2373.
* Changes the LICENSE file to describe the dual license. Moved the MIT license to docs/LICENSE-MIT. Added the Apache-2.0 license to docs/LICENSE-APACHE. I opted for this approach over dumping both license files at the root (the more common approach) for a number of reasons:
* Github links to the "first" license file (LICENSE-APACHE) in its license links (you can see this in the wgpu and rust-analyzer repos). People clicking these links might erroneously think that the apache license is the only option. Rust and Amethyst both use COPYRIGHT or COPYING files to solve this problem, but this creates more file noise (if you do everything at the root) and the naming feels way less intuitive.
* People have a reflex to look for a LICENSE file. By providing a single license file at the root, we make it easy for them to understand our licensing approach.
* I like keeping the root clean and noise free
* There is precedent for putting the apache and mit license text in sub folders (amethyst)
* Removed the `Copyright (c) 2020 Carter Anderson` copyright notice from the MIT license. I don't care about this attribution, it might make license compliance more difficult in some cases, and it didn't properly attribute other contributors. We shouldn't replace it with something like "Copyright (c) 2021 Bevy Contributors" because "Bevy Contributors" is not a legal entity. Instead, we just won't include the copyright line (which has precedent ... Rust also uses this approach).
* Updates crates to use the new "MIT OR Apache-2.0" license value
* Removes the old legion-transform license file from bevy_transform. bevy_transform has been its own, fully custom implementation for a long time and that license no longer applies.
* Added a License section to the main readme
* Updated our Bevy Plugin licensing guidelines.
As a follow-up we should update the website to properly describe the new license.
Closes#2373
# Objective
Reduce compilation time
## Solution
Remove unused dependencies. While this PR doesn't remove any crates from `Cargo.lock`, it may unlock more build parallelism.
Fixes the frag shader for unlit materials by correcting the scope of the `#ifndef` to include the light functions. Closes#2190, introduced in #2112.
Tested by changing materials in the `3d_scene` example to be unlit. Unsure how to prevent future regressions without creating a test case scene that will catch these runtime panics.
This gets rid of multiple unsafe blocks that we had to maintain ourselves, and instead depends on a library that's commonly used and supported by the ecosystem. We also get support for glam types for free.
There are still some things to clear up with the `Bytes` trait, but that is a more substantial change and can be done separately. Also, there are already separate efforts to use the `crevice` crate, so I've just added that as a TODO.
Changes to get Bevy to compile with wgpu master.
With this, on a Mac:
* 2d examples look fine
* ~~3d examples crash with an error specific to metal about a compilation error~~
* 3d examples work fine after enabling feature `wgpu/cross`
Feature `wgpu/cross` seems to be needed only on some platforms; I'm not sure how to know which ones. It was introduced in https://github.com/gfx-rs/wgpu-rs/pull/826