# Objective
Fixes `StandardMaterial` texture update (see sample code below).
Most probably fixes #3674 (did not test).
## Solution
Material updates, such as the PBR material update, reference the underlying `GpuImage`, as seen here: 9a7852db0f/crates/bevy_pbr/src/pbr_material.rs (L177)
However, the `GpuImage` update may currently happen *after* the material update fetches the GPU image, so the material ends up not referencing the updated GPU image.
In this pull request, I introduce new system labels for the render asset plugin and assign `RenderAssetPlugin::<Image>` to the `PreAssetExtract` stage, so that it is executed before any material updates.
Code to test.
Expected behavior:
* should update to red texture
Unexpected behavior (before this merge):
* texture stays randomly as green one (depending on the execution order of systems)
```rust
use bevy::{
    prelude::*,
    render::render_resource::{Extent3d, TextureDimension, TextureFormat},
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .add_system(changes)
        .run();
}

struct Iteration(usize);

#[derive(Component)]
struct MyComponent;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
    mut images: ResMut<Assets<Image>>,
) {
    commands.spawn_bundle(PointLightBundle {
        point_light: PointLight {
            ..Default::default()
        },
        transform: Transform::from_xyz(4.0, 8.0, 4.0),
        ..Default::default()
    });
    commands.spawn_bundle(PerspectiveCameraBundle {
        transform: Transform::from_xyz(-2.0, 0.0, 5.0)
            .looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
        ..Default::default()
    });
    commands.insert_resource(Iteration(0));
    commands
        .spawn_bundle(PbrBundle {
            mesh: meshes.add(Mesh::from(shape::Quad::new(Vec2::new(3., 2.)))),
            material: materials.add(StandardMaterial {
                base_color_texture: Some(images.add(Image::new(
                    Extent3d {
                        width: 600,
                        height: 400,
                        depth_or_array_layers: 1,
                    },
                    TextureDimension::D2,
                    [0, 255, 0, 128].repeat(600 * 400), // GREEN
                    TextureFormat::Rgba8Unorm,
                ))),
                ..Default::default()
            }),
            ..Default::default()
        })
        .insert(MyComponent);
}

fn changes(
    mut materials: ResMut<Assets<StandardMaterial>>,
    mut images: ResMut<Assets<Image>>,
    mut iteration: ResMut<Iteration>,
    webview_query: Query<&Handle<StandardMaterial>, With<MyComponent>>,
) {
    if iteration.0 == 2 {
        let material = materials.get_mut(webview_query.single()).unwrap();
        let image = images
            .get_mut(material.base_color_texture.as_ref().unwrap())
            .unwrap();
        image
            .data
            .copy_from_slice(&[255, 0, 0, 255].repeat(600 * 400));
    }
    iteration.0 += 1;
}
```
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Load skeletal weights and indices from GLTF files. Animate meshes.
## Solution
- Load skeletal weights and indices from GLTF files.
- Added a `SkinnedMesh` component and a `SkinnedMeshInverseBindPose` asset
- Added `extract_skinned_meshes` to extract joint matrices.
- Added queue phase systems for enqueuing the buffer writes.
Some notes:
- This ports part of #2359 to the current main.
- This generates new `BufferVec`s and bind groups every frame. The expectation here is that the number of `Query::get` calls during extract is probably going to be the stronger bottleneck, with up to 256 calls per skinned mesh. Until that is optimized, caching buffers and bind groups is probably a non-concern.
- Unfortunately, due to the uniform size requirements, this means a 16KB buffer is allocated for every skinned mesh every frame. There's probably a few ways to get around this, but most of them require either compute shaders or storage buffers, which are both incompatible with WebGL2.
Co-authored-by: james7132 <contact@jamessliu.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
# Objective
- Fixes #3970
- To support Bevy's shader abstraction (shader defs, shader imports and hot shader reloading) for compute shaders, I have followed Cart's advice and changed the `PipelineCache` to accommodate both compute and render pipelines.
## Solution
- renamed `RenderPipelineCache` to `PipelineCache`
- Cached pipelines are now represented by an enum with render and compute variants (see the sketch after this list)
- split the `SpecializedPipelines` into `SpecializedRenderPipelines` and `SpecializedComputePipelines`
- updated the game of life example
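At its core, the cache now holds something like the following sketch (illustrative only; the actual types in this PR are more involved):
```rust
// Sketch only: the cache stores both pipeline kinds behind one enum,
// so render and compute pipelines share creation and lookup machinery.
enum CachedPipeline {
    Render(wgpu::RenderPipeline),
    Compute(wgpu::ComputePipeline),
}

struct PipelineCache {
    pipelines: Vec<CachedPipeline>,
}
```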
## Open Questions
- should `SpecializedRenderPipelines` and `SpecializedComputePipelines` be merged and how would we do that?
- should the `get_render_pipeline` and `get_compute_pipeline` methods be merged?
- is pipeline specialization for different entry points a good pattern?
Co-authored-by: Kurt Kühnert <51823519+Ku95@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Add a helper for storage buffers similar to `UniformVec`
## Solution
- Add a `StorageBuffer<T, U>` where `T` is the main body of the shader struct without any final variable-sized array member, and `U` is the type of the items in a variable-sized array.
- Use `()` as the type for unwanted parts, e.g. `StorageBuffer::<(), Vec4>::default()` would construct a binding that would work with `struct MyType { data: array<vec4<f32>>; }` in WGSL and `StorageBuffer::<MyType, ()>::default()` would work with `struct MyType { ... }` in WGSL as long as there are no variable-sized arrays.
- Std430 requires that there is at most one variable-sized array in a storage buffer, that if there is one it is the last member of the binding, and that it has at least one item. `StorageBuffer` handles all of these constraints.
Add support for removing nodes, edges, and subgraphs. This enables live re-wiring of the render graph.
This was something I did to support the MSAA implementation, but it turned out to be unnecessary there. However, it is still useful so here it is in its own PR.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
When loading a gltf scene with a camera, bevy will panic at ``thread 'main' panicked at 'scene contains the unregistered type `bevy_render::camera::bundle::Camera3d`. consider registering the type using `app.register_type::<T>()`', /home/jakob/dev/rust/contrib/bevy/bevy/crates/bevy_scene/src/scene_spawner.rs:332:35``.
## Solution
Register the camera types to fix the panic.
# Objective
- Reduce time spent in the `check_visibility` system
## Solution
- Use `Vec3A` for all bounding volume types to leverage SIMD optimisations and to avoid repeated runtime conversions from `Vec3` to `Vec3A`
- Inline all bounding volume intersection methods
- Add on-the-fly calculated `Aabb` -> `Sphere` conversion (sketched after this list) and do `Sphere`-`Frustum` intersection tests before `Aabb`-`Frustum` tests. This is faster for `many_cubes` but could be slower in other cases where the sphere test gives a false positive that the `Aabb` test discards. Also, I tested precalculating the `Sphere`s and inserting them alongside the `Aabb`, but this was slower.
- Do not test meshes against the far plane. Apparently games don't do this anymore with infinite projections, and it's one fewer plane to test against. I made it optional and still do the test for culling lights but that is up for discussion.
- These collectively reduce `check_visibility` execution time in `many_cubes -- sphere` from 2.76ms to 1.48ms and increase frame rate from ~42fps to ~44fps
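For illustration, a minimal sketch (not the PR's exact code) of the on-the-fly `Aabb` -> `Sphere` conversion referenced above, using `Vec3A`:
```rust
use bevy::math::Vec3A;

// Sketch only: mirrors the shape of an Aabb stored as center + half extents.
struct Aabb {
    center: Vec3A,
    half_extents: Vec3A,
}

/// The bounding sphere that encloses all eight corners of the Aabb,
/// used for the cheap sphere-vs-frustum pre-test before the Aabb test.
fn bounding_sphere(aabb: &Aabb) -> (Vec3A, f32) {
    (aabb.center, aabb.half_extents.length())
}
```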
Tracing added support for "inline span entering", which cuts down on a lot of complexity:
```rust
let span = info_span!("my_span").entered();
```
This adapts our code to use this pattern where possible, and updates our docs to recommend it.
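For comparison, the pre-existing two-step pattern this replaces looked roughly like:
```rust
// Create the span, then enter it, holding the guard for the scope's duration.
let span = info_span!("my_span");
let _guard = span.enter();
```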
This produces equivalent tracing behavior. Here is a side by side profile of "before" and "after" these changes.
![image](https://user-images.githubusercontent.com/2694663/158912137-b0aa6dc8-c603-425f-880f-6ccf5ad1b7ef.png)
# Objective
- Support compressed textures, including 'universal' formats (ETC1S, UASTC) and transcoding of them to supported GPU formats
- Support `.dds`, `.ktx2`, and `.basis` files
## Solution
- Fixes https://github.com/bevyengine/bevy/issues/3608. Look there for more details.
- Note that the functionality is all enabled through non-default features. If it is desirable to enable some by default, I can do that.
- The `basis-universal` crate, used for `.basis` file support and for transcoding, is built on bindings against a C++ library. It's not feasible to rewrite in Rust in a short amount of time. There are no Rust alternatives of which I am aware and it's specialised code. In its current state it doesn't support the wasm target, but I don't know for sure. However, it is possible to build the upstream C++ library with emscripten, so there is perhaps a way to add support for web too with some shenanigans.
- There's no support for transcoding from BasisLZ/ETC1S in KTX2 files as it was quite non-trivial to implement and didn't feel important given people could use `.basis` files for ETC1S.
# Objective
- Fixes #3300
- `RunSystem` is messy
## Solution
- Adds the trick theorised in https://github.com/bevyengine/bevy/issues/3300#issuecomment-991791234
P.S. I also want this for an experimental refactoring of `Assets`, to remove the duplication of `Events<AssetEvent<T>>`
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Hierarchy tools are not just used for `Transform`: they are also used for scenes.
- In the future there's interest in using them for other features, such as visibility inheritance.
- The fact that these tools are found in `bevy_transform` causes a great deal of user and developer confusion
- Fixes #2758.
## Solution
- Split `bevy_transform` into two!
- Make everything work again.
Note that this is a very tightly scoped PR: I *know* there are code quality and docs issues that existed in bevy_transform that I've just moved around. We should fix those in a separate PR and try to merge this ASAP to reduce the bitrot involved in splitting an entire crate.
## Frustrations
The API around `GlobalTransform` is a mess: we have massive code and docs duplication, no link between the two types and no clear way to extend this to other forms of inheritance.
In the medium-term, I feel pretty strongly that `GlobalTransform` should be replaced by something like `Inherited<Transform>`, which lives in `bevy_hierarchy`:
- avoids code duplication
- makes the inheritance pattern extensible
- links the types at the type-level
- allows us to remove all references to inheritance from `bevy_transform`, making it more useful as a standalone crate and cleaning up its docs
## Additional context
- double-blessed by @cart in https://github.com/bevyengine/bevy/issues/4141#issuecomment-1063592414 and https://github.com/bevyengine/bevy/issues/2758#issuecomment-913810963
- preparation for more advanced / cleaner hierarchy tools: go read https://github.com/bevyengine/rfcs/pull/53 !
- originally attempted by @finegeometer in #2789. It was a great idea, just needed more discussion!
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
**Problem**
- whenever you want more than one of the builtin cameras (for example multiple windows, split screen, portals), you need to add a render graph node that executes the correct sub graph, extract the camera into the render world and add the correct `RenderPhase<T>` components
- querying for the 3d camera is annoying because you need to compare the camera's name to e.g. `CameraPlugin::CAMERA_3d`
**Solution**
- Introduce the marker types `Camera3d`, `Camera2d` and `CameraUi`
-> `Query<&mut Transform, With<Camera3d>>` works
- `PerspectiveCameraBundle::new_3d()` and `PerspectiveCameraBundle::<Camera3d>::default()` contain the `Camera3d` marker
- `OrthographicCameraBundle::new_3d()` has `Camera3d`, `OrthographicCameraBundle::new_2d()` has `Camera2d`
- remove `ActiveCameras`, `ExtractedCameraNames`
- run 2d, 3d and ui passes for every camera of their respective marker
-> no custom setup for multiple windows example needed
**Open questions**
- do we need a replacement for `ActiveCameras`? What about a component `ActiveCamera { is_active: bool }` similar to `Visibility`?
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Make insertion of uniform components faster
## Solution
- Use batch insertion in the prepare_uniform_components system
- Improves `many_cubes -- sphere` from ~42fps to ~43fps
Co-authored-by: François <mockersf@gmail.com>
# Objective
Fixes #3744
## Solution
The old code used the formula `normal . center + d + radius <= 0` to determine if the sphere with center `center` and radius `radius` is outside the plane with normal `normal` and distance from origin `d`. This only works if `normal` is normalized, which is not necessarily the case. Instead, `normal` and `d` are both multiplied by some factor that `radius` isn't multiplied by. So the additional code multiplied `radius` by that factor.
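A minimal sketch of the corrected test, assuming the plane is stored as an unnormalized `(normal, d)` pair:
```rust
use bevy::math::Vec3;

/// Sketch only: with an unnormalized normal, `normal . center + d` is the
/// signed distance scaled by `normal.length()`, so the radius must be
/// scaled by the same factor for the comparison to be consistent.
fn sphere_outside_plane(center: Vec3, radius: f32, normal: Vec3, d: f32) -> bool {
    normal.dot(center) + d + radius * normal.length() <= 0.0
}
```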
# Objective
Currently, errors in the render graph runner are exposed via a `Result::unwrap()` panic message, which dumps the debug representation of the error.
## Solution
This PR updates `render_system` to log the chain of errors, followed by an explicit panic:
```
ERROR bevy_render::renderer: Error running render graph:
ERROR bevy_render::renderer: > encountered an error when running a sub-graph
ERROR bevy_render::renderer: > tried to pass inputs to sub-graph "outline_graph", which has no input slots
thread 'main' panicked at 'Error running render graph: encountered an error when running a sub-graph', /[redacted]/bevy/crates/bevy_render/src/renderer/mod.rs:44:9
```
Some errors' `Display` impls (via `thiserror`) have also been updated to provide more detail about the cause of the error.
# Objective
- Currently there is no way of making an indirect draw call from a tracked render pass.
- This is a very useful feature for GPU based rendering.
## Solution
- Expose the `draw_indirect` and `draw_indexed_indirect` methods from the wgpu `RenderPass` in the `TrackedRenderPass`.
## Alternative
- #3595: Expose the underlying `RenderPass` directly
# Objective
- In the large majority of cases, users were calling `.unwrap()` immediately after `.get_resource`.
- Attempting to add more helpful error messages here resulted in endless manual boilerplate (see #3899 and the linked PRs).
## Solution
- Add an infallible variant named `.resource` and so on.
- Use these infallible variants over `.get_resource().unwrap()` across the code base.
## Notes
I did not provide equivalent methods on `WorldCell`, in favor of removing it entirely in #3939.
## Migration Guide
Infallible variants of `.get_resource` have been added that implicitly panic, rather than needing to be unwrapped.
Replace `world.get_resource::<Foo>().unwrap()` with `world.resource::<Foo>()`.
## Impact
- `.unwrap` search results before: 1084
- `.unwrap` search results after: 942
- internal `unwrap_or_else` calls added: 4
- trivial unwrap calls removed from tests and code: 146
- uses of the new `try_get_resource` API: 11
- percentage of the time the unwrapping API was used internally: 93%
# Objective
Will fix #3377 and #3254
## Solution
Use an enum to represent either a `WindowId` or `Handle<Image>` in place of `Camera::window`.
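A sketch of the shape of such an enum (the exact name and derives are in the PR):
```rust
use bevy::asset::Handle;
use bevy::render::texture::Image;
use bevy::window::WindowId;

// Sketch only: a camera renders either to a window's surface or to an image.
enum RenderTarget {
    Window(WindowId),
    Image(Handle<Image>),
}
```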
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This PR makes a number of changes to how meshes and vertex attributes are handled, with the goal of enabling easy and flexible custom vertex attributes:
* Reworks the `Mesh` type to use the newly added `VertexAttribute` internally
* `VertexAttribute` defines the name, a unique `VertexAttributeId`, and a `VertexFormat`
* `VertexAttributeId` is used to produce consistent sort orders for vertex buffer generation, replacing the more expensive and often surprising "name based sorting"
* Meshes can be used to generate a `MeshVertexBufferLayout`, which defines the layout of the gpu buffer produced by the mesh. `MeshVertexBufferLayouts` can then be used to generate actual `VertexBufferLayouts` according to the requirements of a specific pipeline. This decoupling of "mesh layout" vs "pipeline vertex buffer layout" is what enables custom attributes. We don't need to standardize _mesh layouts_ or contort meshes to meet the needs of a specific pipeline. As long as the mesh has what the pipeline needs, it will work transparently.
* Mesh-based pipelines now specialize on `&MeshVertexBufferLayout` via the new `SpecializedMeshPipeline` trait (which behaves like `SpecializedPipeline`, but adds `&MeshVertexBufferLayout`). The integrity of the pipeline cache is maintained because the `MeshVertexBufferLayout` is treated as part of the key (which is fully abstracted from implementers of the trait ... no need to add any additional info to the specialization key).
* Hashing `MeshVertexBufferLayout` is too expensive to do for every entity, every frame. To make this scalable, I added a generalized "pre-hashing" solution to `bevy_utils`: `Hashed<T>` keys and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). Why didn't I just do the quick and dirty in-place "pre-compute hash and use that u64 as a key in a hashmap" that we've done in the past? Because it's wrong! Hashes by themselves aren't enough because two different values can produce the same hash. Re-hashing a hash is even worse! I decided to build a generalized solution because this pattern has come up in the past and we've chosen to do the wrong thing. Now we can do the right thing! This did unfortunately require pulling in `hashbrown` and using that in `bevy_utils`, because avoiding re-hashes requires the `raw_entry_mut` api, which isn't stabilized yet (and may never be ... `entry_ref` has favor now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. Note that adding `hashbrown` doesn't increase our dependency count because it was already in our tree. I will probably break these changes out into their own PR.
* Specializing on `MeshVertexBufferLayout` has one non-obvious behavior: it can produce identical pipelines for two different MeshVertexBufferLayouts. To optimize the number of active pipelines / reduce re-binds while drawing, I de-duplicate pipelines post-specialization using the final `VertexBufferLayout` as the key. For example, consider a pipeline that needs the layout `(position, normal)` and is specialized using two meshes: `(position, normal, uv)` and `(position, normal, other_vec2)`. If both of these meshes result in `(position, normal)` specializations, we can use the same pipeline! Now we do. Cool!
To briefly illustrate, this is what the relevant section of `MeshPipeline`'s specialization code looks like now:
```rust
impl SpecializedMeshPipeline for MeshPipeline {
    type Key = MeshPipelineKey;

    fn specialize(
        &self,
        key: Self::Key,
        layout: &MeshVertexBufferLayout,
    ) -> RenderPipelineDescriptor {
        let mut vertex_attributes = vec![
            Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
            Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
            Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
        ];
        let mut shader_defs = Vec::new();
        if layout.contains(Mesh::ATTRIBUTE_TANGENT) {
            shader_defs.push(String::from("VERTEX_TANGENTS"));
            vertex_attributes.push(Mesh::ATTRIBUTE_TANGENT.at_shader_location(3));
        }
        let vertex_buffer_layout = layout
            .get_layout(&vertex_attributes)
            .expect("Mesh is missing a vertex attribute");
```
Notice that this is _much_ simpler than it was before. And now any mesh with any layout can be used with this pipeline, provided it has vertex positions, normals, and uvs. We even got to remove `HAS_TANGENTS` from MeshPipelineKey and `has_tangents` from `GpuMesh`, because that information is redundant with `MeshVertexBufferLayout`.
This is still a draft because I still need to:
* Add more docs
* Experiment with adding error handling to mesh pipeline specialization (which would print errors at runtime when a mesh is missing a vertex attribute required by a pipeline). If it doesn't tank perf, we'll keep it.
* Consider breaking out the PreHash / hashbrown changes into a separate PR.
* Add an example illustrating this change
* Verify that the "mesh-specialized pipeline de-duplication code" works properly
Please don't yell at me for not doing these things yet :) Just trying to get this in peoples' hands asap.
Alternative to #3120. Fixes #3030.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Adds "hot reloading" of internal assets, which is normally not possible because they are loaded using `include_str` / direct Asset collection access.
This is accomplished via the following:
* Add a new `debug_asset_server` feature flag
* When that feature flag is enabled, create a second App with a second AssetServer that points to a configured location (by default the `crates` folder). Plugins that want to add hot reloading support for their assets can call the new `app.add_debug_asset::<T>()` and `app.init_debug_asset_loader::<T>()` functions.
* Load "internal" assets using the new `load_internal_asset` macro. By default this is identical to the current "include_str + register in asset collection" approach. But if the `debug_asset_server` feature flag is enabled, it will also load the asset dynamically in the debug asset server using the file path. It will then set up a correlation between the "debug asset" and the "actual asset" by listening for asset change events.
This is an alternative to #3673. The goal was to keep the boilerplate and features flags to a minimum for bevy plugin authors, and allow them to home their shaders near relevant code.
This is a draft because I haven't done _any_ quality control on this yet. I'll probably rename things and remove a bunch of unwraps. I just got it working and wanted to use it to start a conversation.
Fixes #3660
This enables shaders to (optionally) define their import path inside their source. This has a number of benefits:
1. enables users to define their own custom paths directly in their assets
2. moves the import path "close" to the asset instead of centralized in the plugin definition, which seems "better" to me.
3. makes "internal hot shader reloading" way more reasonable (see #3966)
4. logically opens the door to importing "parts" of a shader by defining "import_path blocks".
```rust
#define_import_path bevy_pbr::mesh_struct
struct Mesh {
    model: mat4x4<f32>;
    inverse_transpose_model: mat4x4<f32>;
    // 'flags' is a bit field indicating various options. u32 is 32 bits so we have up to 32 options.
    flags: u32;
};
let MESH_FLAGS_SHADOW_RECEIVER_BIT: u32 = 1u;
```
For some keys, it is too expensive to hash them on every lookup. Historically in Bevy, we have regrettably done the "wrong" thing in these cases (pre-computing hashes, then re-hashing them) because Rust's built-in hashed collections don't give us the tools we need to do otherwise. Doing this is "wrong" because two different values can result in the same hash. Hashed collections generally get around this by falling back to equality checks on hash collisions. You can't do that if the key _is_ the hash. Additionally, re-hashing a hash increases the odds of collision!
#3959 needs pre-hashing to be viable, so I decided to finally properly solve the problem. The solution involves two different changes:
1. A new generalized "pre-hashing" solution in bevy_utils: `Hashed<T>` types, which store a value alongside a pre-computed hash, and `PreHashMap<K, V>` (which uses `Hashed<T>` internally). `PreHashMap` is just an alias for a normal HashMap that uses `Hashed<T>` as the key and a new `PassHash` implementation as the Hasher (a sketch of the idea follows below).
2. Replacing the `std::collections` re-exports in `bevy_utils` with equivalent `hashbrown` impls. Avoiding re-hashes requires the `raw_entry_mut` api, which isn't stabilized yet (and may never be ... `entry_ref` has favor now, but also isn't available yet). If std's HashMap ever provides the tools we need, we can move back to that. The latest version of `hashbrown` adds support for the `entry_ref` api, so we can move to that in preparation for an std migration, if that's the direction they seem to be going in. Note that adding hashbrown doesn't increase our dependency count because it was already in our tree.
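A minimal sketch of the idea behind `Hashed<T>` and a pass-through hasher (not the exact bevy_utils implementation):
```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hash, Hasher};

/// Sketch only: a value stored next to its precomputed hash, so map lookups
/// never re-hash it, while equality still falls back to comparing values.
struct Hashed<T> {
    hash: u64,
    value: T,
}

impl<T: Hash> Hashed<T> {
    fn new(value: T) -> Self {
        let mut hasher = std::collections::hash_map::DefaultHasher::new();
        value.hash(&mut hasher);
        Self { hash: hasher.finish(), value }
    }
}

impl<T: PartialEq> PartialEq for Hashed<T> {
    fn eq(&self, other: &Self) -> bool {
        // Cheap hash comparison first, then a real equality check on collision.
        self.hash == other.hash && self.value == other.value
    }
}

impl<T: Eq> Eq for Hashed<T> {}

impl<T> Hash for Hashed<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Feed the precomputed hash straight through; no re-hashing of T.
        state.write_u64(self.hash);
    }
}

/// A hasher that just passes the precomputed u64 along.
#[derive(Default)]
struct PassHasher {
    hash: u64,
}

impl Hasher for PassHasher {
    fn finish(&self) -> u64 {
        self.hash
    }
    fn write(&mut self, _bytes: &[u8]) {
        unimplemented!("only pre-hashed u64 keys are supported in this sketch");
    }
    fn write_u64(&mut self, hash: u64) {
        self.hash = hash;
    }
}

type PreHashMap<K, V> = HashMap<Hashed<K>, V, BuildHasherDefault<PassHasher>>;
```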
In addition to providing these core tools, I also ported the "table identity hashing" in `bevy_ecs` to `raw_entry_mut`, which was a particularly egregious case.
The biggest outstanding case is `AssetPathId`, which stores a pre-hash. We need AssetPathId to be cheaply clone-able (and ideally Copy), but `Hashed<AssetPath>` requires ownership of the AssetPath, which makes cloning ids way more expensive. We could consider doing `Hashed<Arc<AssetPath>>`, but cloning an arc is still a non-trivial expense that needs to be considered. I would like to handle this in a separate PR. And given that we will be re-evaluating the Bevy Assets implementation in the very near future, I'd prefer to hold off until after that conversation is concluded.
# Objective
- `WgpuOptions` is mutated to be updated with the actual device limits and features, but this information is readily available to both the main and render worlds through the `RenderDevice` which has .limits() and .features() methods
- Information about the adapter in terms of its name, the backend in use, etc were not being exposed but have clear use cases for being used to take decisions about what rendering code to use. For example, if something works well on AMD GPUs but poorly on Intel GPUs. Or perhaps something works well in Vulkan but poorly in DX12.
## Solution
- Stop mutating `WgpuOptions` and don't insert the updated values into the main and render worlds
- Return `AdapterInfo` from `initialize_renderer` and insert it into the main and render worlds
- Use `RenderDevice` limits in the lighting code that was using `WgpuOptions.limits`.
- Renamed `WgpuOptions` to `WgpuSettings`
What it says on the tin.
This has got more to do with making `clippy` slightly more *quiet* than it does with changing anything that might greatly impact readability or performance.
That said, deriving `Default` for a couple of structs is a nice easy win.
# Objective
- Support overriding wgpu features and limits that were calculated from default values or queried from the adapter/backend.
- Fixes #3686
## Solution
- Add `disabled_features: Option<wgpu::Features>` to `WgpuOptions`
- Add `constrained_limits: Option<wgpu::Limits>` to `WgpuOptions`
- After maybe obtaining updated features and limits from the adapter/backend in the case of `WgpuOptionsPriority::Functionality`, enable the `WgpuOptions` `features`, disable the `disabled_features`, and constrain the `limits` by `constrained_limits`.
- Note that constraining the limits means for `wgpu::Limits` members named `max_.*` we take the minimum of that which was configured/queried for the backend/adapter and the specified constrained limit value. This means the configured/queried value is used if the constrained limit is larger as that is as much as the device/API supports, or the constrained limit value is used if it is smaller as we are imposing an artificial constraint. For members named `min_.*` we take the maximum instead. For example, a minimum stride might be 256 but we set constrained limit value of 1024, then 1024 is the more conservative value. If the constrained limit value were 16, then 256 would be the more conservative.
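A minimal sketch of the constraining logic for a couple of representative members (the real code covers all of `wgpu::Limits`):
```rust
// Sketch only: take the conservative side of each limit.
fn constrain_limits(adapter: wgpu::Limits, constrained: wgpu::Limits) -> wgpu::Limits {
    wgpu::Limits {
        // max_* members: never exceed the adapter/backend value or the constraint.
        max_texture_dimension_2d: adapter
            .max_texture_dimension_2d
            .min(constrained.max_texture_dimension_2d),
        // min_* members: never go below the adapter/backend value or the constraint.
        min_uniform_buffer_offset_alignment: adapter
            .min_uniform_buffer_offset_alignment
            .max(constrained.min_uniform_buffer_offset_alignment),
        ..adapter
    }
}
```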
# Objective
If a user attempts to `.add_render_command::<P, C>()` on a world that does not contain `DrawFunctions<P>`, the engine panics with a generic `Option::unwrap` message:
```
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /[redacted]/bevy/crates/bevy_render/src/render_phase/draw.rs:318:76
```
## Solution
This PR adds a panic message describing the problem:
```
thread 'main' panicked at 'DrawFunctions<outline::MeshStencil> must be added to the world as a resource before adding render commands to it', /[redacted]/bevy/crates/bevy_render/src/render_phase/draw.rs:322:17
```
# Objective
The documentation was unclear but it seemed like it was intended to _only_ flip the texture coordinates of the quad. However, it was also swapping the vertex positions, which resulted in inverted winding order so the front became a back face, and the normal was pointing into the face instead of out of it.
## Solution
- This change makes the only difference the UVs being horizontally flipped.
(cherry picked from commit de943381bd2a8b242c94db99e6c7bbd70006d7c3)
# Objective
The view uniform lacks view transform information. The inverse transform is currently provided but this is not sufficient if you do not have access to an `inverse` function (such as in WGSL).
## Solution
Grab the view transform, put it in the view uniform, use the same matrix to compute the inverse as well.
# Objective
The docs for `{VertexState, FragmentState}::entry_point` stipulate that the entry point function in the shader must return void. This seems to be specific to GLSL; WGSL has no `void` type and its entry point functions return values that describe their output.
## Solution
Remove the mention of the `void` return type.
# Objective
Enable the user to specify any presentation modes (including `Mailbox`).
Fixes #3807
## Solution
I've added a new `PresentMode` enum in `bevy_window` that mirrors the `wgpu` enum 1:1. Alternatively, I could add a new dependency on `wgpu-types` if that would be preferred.
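A sketch of the mirrored enum (matching the wgpu 0.12 variants):
```rust
// Sketch only: a bevy_window-level copy of wgpu::PresentMode so that
// bevy_window does not need to depend on wgpu (or wgpu-types) directly.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub enum PresentMode {
    Immediate,
    Mailbox,
    Fifo,
}
```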
# Objective
When printing shader validation error messages, we didn't print the shader sources or the error message text, which led to some confusing error messages.
```cs
error:
┌─ wgsl:15:11
│
15 │ return material.color + 1u;
│ ^^^^^^^^^^^^^^^^^^^^ naga::Expression [11]
```
## Solution
New error message:
```cs
error: Entry point fragment at Vertex is invalid
┌─ wgsl:15:11
│
15 │ return material.color + 1u;
│ ^^^^^^^^^^^^^^^^^^^^ naga::Expression [11]
│
= Expression [11] is invalid
= Operation Add can't work with [8] and [10]
```
# Objective
Add a simple way for users to get the size of a loaded texture in an `Image` object.
Aims to solve #3689
## Solution
Add a `size() -> Vec2` method
Add two simple tests for this method.
Updates:
- Method name changed from `size_2d` to `size`
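A minimal sketch of what the method computes (the real implementation lives on `Image` and reads its texture descriptor):
```rust
use bevy::math::Vec2;
use bevy::render::texture::Image;

// Sketch only: the 2D size of the underlying texture, as a Vec2.
fn image_size(image: &Image) -> Vec2 {
    Vec2::new(
        image.texture_descriptor.size.width as f32,
        image.texture_descriptor.size.height as f32,
    )
}
```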
# Objective
- While it is not safe to enable mappable primary buffers for all GPUs, it should be preferred for integrated GPUs where an integrated GPU is one that is sharing system memory.
## Solution
- Auto-disable mappable primary buffers only for discrete GPUs. If the GPU is integrated and mappable primary buffers are supported, use them.
# Objective
In order to create a glsl shader, we must provide the `naga::ShaderStage` type which is not exported by bevy, meaning a user would have to manually include naga just to access this type.
`pub fn from_glsl(source: impl Into<Cow<'static, str>>, stage: naga::ShaderStage) -> Shader {`
## Solution
Re-export `naga::ShaderStage` from `render_resources`
# Objective
- Allow opting-out of the built-in frustum culling for cases where its behaviour would be incorrect
- Make use of this in the shader_instancing example that uses a custom instancing method. The built-in frustum culling breaks the custom instancing in the shader_instancing example if the camera is moved to:
```rust
commands.spawn_bundle(PerspectiveCameraBundle {
    transform: Transform::from_xyz(12.0, 0.0, 15.0)
        .looking_at(Vec3::new(12.0, 0.0, 0.0), Vec3::Y),
    ..Default::default()
});
```
...such that the Aabb of the cube Mesh that is at the origin goes completely out of view. This incorrectly (for the purpose of the custom instancing) culls the `Mesh` and so culls all instances even though some may be visible.
## Solution
- Add a `NoFrustumCulling` marker component
- Do not compute and add an `Aabb` to `Mesh` entities without an `Aabb` if they have a `NoFrustumCulling` marker component
- Do not apply frustum culling to entities with the `NoFrustumCulling` marker component
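Opting out then looks like this on the spawning side (a sketch, assuming the marker is exported from `bevy::render::view`):
```rust
use bevy::prelude::*;
use bevy::render::view::NoFrustumCulling;

fn setup(mut commands: Commands, mut meshes: ResMut<Assets<Mesh>>) {
    // Sketch only: the marker keeps the built-in frustum culling from
    // discarding this entity (and, with custom instancing, everything
    // drawn through it) when its Aabb leaves the view.
    commands.spawn_bundle((
        meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
        Transform::default(),
        GlobalTransform::default(),
        Visibility::default(),
        ComputedVisibility::default(),
        NoFrustumCulling,
    ));
}
```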
# Objective
- When using `WgpuOptionsPriority::Functionality`, which is the default, wgpu::Features::MAPPABLE_PRIMARY_BUFFERS would be automatically enabled. This feature can and does have a significant negative impact on performance for discrete GPUs where resizable bar is not supported, which is a common case. As such, this feature should not be automatically enabled.
- Fixes the performance regression part of https://github.com/bevyengine/bevy/issues/3686 and at least some, if not all cases of https://github.com/bevyengine/bevy/issues/3687
## Solution
- When using `WgpuOptionsPriority::Functionality`, use the adapter-supported features, enable `TEXTURE_ADAPTER_SPECIFIC_FORMAT_FEATURES` and disable `MAPPABLE_PRIMARY_BUFFERS`
Fixed doc comment where render Node input/output methods referred to using `RenderContext` for interaction instead of `RenderGraphContext`
# Objective
The doc comments for `Node` refer to `RenderContext` for slots instead of `RenderGraphContext`, which is only confusing because `Node::run` is passed both `RenderContext` and `RenderGraphContext`
## Solution
Fixed the typo
Super tiny thing. Found this while reviewing #3479.
# Objective
- Simplify code
- Fix the link in the doc comment
## Solution
- Import a single item :)
Co-authored-by: Pascal Hertleif <pascal@technocreatives.com>
# Objective
CI should check for missing backticks in doc comments.
Fixes #3435
## Solution
`clippy` has a lint for this: `doc_markdown`. This enables that lint in the CI script.
Of course, enabling this lint in CI causes a bunch of lint errors, so I've gone through and fixed all of them. This was a huge edit that touched a ton of files, so I split the PR up by crate.
When all of the following are merged, the CI should pass and this can be merged.
+ [x] #3467
+ [x] #3468
+ [x] #3470
+ [x] #3469
+ [x] #3471
+ [x] #3472
+ [x] #3473
+ [x] #3474
+ [x] #3475
+ [x] #3476
+ [x] #3477
+ [x] #3478
+ [x] #3479
+ [x] #3480
+ [x] #3481
+ [x] #3482
+ [x] #3483
+ [x] #3484
+ [x] #3485
+ [x] #3486
#3457 adds the `doc_markdown` clippy lint, which checks doc comments to make sure code identifiers are escaped with backticks. This causes a lot of lint errors, so this is one of a number of PRs that will fix those lint errors one crate at a time.
This PR fixes lints in the `bevy_render` crate.
# Objective
In this PR I added the ability to opt out of graphical backends. Closes #3155.
## Solution
I turned backends into `Option` ~~and removed the panicking sub app API to force users to handle the error (was suggested by `@cart`)~~.
# Objective
The current 2d rendering is specialized to render sprites; we need a generic way to render 2d items, using meshes and materials like we have for 3d.
## Solution
I cloned a good part of `bevy_pbr` into `bevy_sprite/src/mesh2d`, removed lighting and pbr itself, adapted it to 2d rendering, added a `ColorMaterial`, and modified the sprite rendering to break batches around 2d meshes.
~~The PR is a bit crude; I tried to change as little as I could in both the parts copied from 3d and the current sprite rendering to make reviewing easier. In the future, I expect we could make the sprite rendering a normal 2d material, cleanly integrated with the rest.~~ _edit: see <https://github.com/bevyengine/bevy/pull/3460#issuecomment-1003605194>_
## Remaining work
- ~~don't require mesh normals~~ _out of scope_
- ~~add an example~~ _done_
- support 2d meshes & materials in the UI?
- bikeshed names (I didn't think hard about naming, please check if it's fine)
## Remaining questions
- ~~should we add a depth buffer to 2d now that there are 2d meshes?~~ _let's revisit that when we have an opaque render phase_
- ~~should we add MSAA support to the sprites, or remove it from the 2d meshes?~~ _I added MSAA to sprites since it's really needed for 2d meshes_
- ~~how to customize vertex attributes?~~ _#3120_
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- Allow the user to specify the priority when configuring wgpu features/limits and by default use the maximum capabilities of the chosen adapter.
## Solution
- Add a `WgpuOptionsPriority` enum with `Compatibility`, `Functionality` and `WebGL2` options.
- Add a `priority: WgpuOptionsPriority` member to `WgpuOptions`.
- When initialising the renderer, if `WgpuOptions::priority == WgpuOptionsPriority::Functionality`, query the adapter for the available features and limits, use them when creating a device, and update `WgpuOptions` with those values. If `Compatibility`, use the behaviour as before this PR. If `WebGL2`, then use the WebGL2 downlevel limits as used when building for wasm, for convenience of testing WebGL2 limits without having to build for wasm.
- Add an environment variable `WGPU_OPTIONS_PRIO` that takes `compatibility`, `functionality`, `webgl2`.
- Default to `WgpuOptionsPriority::Functionality`.
- Insert updated `WgpuOptions` into render app world as well. This is useful for applying the limits when rendering, such as limiting the directional light shadow map texture to 2048x2048 when using WebGL2 downlevel limits but not on wasm.
- Reduced `draw_state` logs from `debug` to `trace` and added `debug` level logs for the wgpu features and limits. Use `RUST_LOG=bevy_render=debug` to see the output.
# Objective
- Add support for loading lights from glTF 2.0 files
## Solution
- This adds support for the KHR_lights_punctual extension which supports point, directional, and spot lights, though we don't yet support spot lights.
- Inserting light bundles when creating scenes required registering some more light bundle component types.
This PR is part of the issue #3492.
# Objective
- Clean up dead code in `bevy_core`.
- Add and update the `bevy_core` documentation to achieve a 100% documentation coverage.
- Add the `#![warn(missing_docs)]` lint to keep the documentation coverage for the future.
## Solution
- Remove unused `Bytes`, `FromBytes`, `Labels`, and `EntityLabels` types and associated systems.
- Made several types private that really only have use as internal types, mostly pertaining to fixed timestep execution.
- Add and update the bevy_core documentation.
- Add the `#![warn(missing_docs)]` lint.
## Open Questions
Should more of the internal states of `FixedTimestep` be public? Seems mostly to be an implementation detail unless someone really needs that fixed timestep state.
# Objective
Docs updates.
## Solution
- Detail what `OrthographicCameraBundle::new_2d()` creates.
- Fix a few renamed parameters in comments of `TrackedRenderPass`.
- Add missing comments for viewport and debug markers.
Co-authored-by: Jerome Humbert <djeedai@gmail.com>
# Objective
- `Msaa` was disabled in webgl due to a bug in wgpu
- Bug has been fixed (https://github.com/gfx-rs/wgpu/pull/2307) and backported (https://github.com/gfx-rs/wgpu/pull/2327), and updates for [`wgpu-core`](https://crates.io/crates/wgpu-core/0.12.1) and [`wgpu-hal`](https://crates.io/crates/wgpu-hal/0.12.1) have been released
## Solution
- Remove custom config for `Msaa` in webgl
- I also changed two options that were using the arch instead of the `webgl` feature. It shouldn't change much for webgl, but could help if someone wants to target wasm but not webgl2
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
Dynamic types (`DynamicStruct`, `DynamicTupleStruct`, `DynamicTuple`, `DynamicList` and `DynamicMap`) are used when deserializing scenes, but currently they can only be applied to existing concrete types. This leads to issues when trying to spawn non-trivial deserialized scenes.
For components, the issue is avoided by requiring that reflected components implement ~~`FromResources`~~ `FromWorld` (or `Default`). When spawning, a new concrete type is created that way, and the dynamic type is applied to it. Unfortunately, some components don't have any valid implementation of these traits.
In addition, any `Vec` or `HashMap` inside a component will panic when a dynamic type is pushed into it (for instance, `Text` panics when adding a text section).
To solve this issue, this PR adds the `FromReflect` trait that creates a concrete type from a dynamic type that represents it, derives the trait alongside the `Reflect` trait, drops the ~~`FromResources`~~ `FromWorld` requirement on reflected components, ~~and enables reflection for UI and Text bundles~~. It also adds the requirement that fields ignored with `#[reflect(ignore)]` implement `Default`, since we need to initialize them somehow.
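In practice a reflected type picks the new trait up through the derive, e.g. (a sketch; the type and field names are made up):
```rust
use bevy::prelude::*;
use bevy::reflect::FromReflect;

#[derive(Reflect, FromReflect, Default)]
struct Stats {
    health: f32,
    // Ignored fields must implement Default so FromReflect can fill them in
    // when reconstructing a concrete value from a dynamic one.
    #[reflect(ignore)]
    debug_label: String,
}
```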
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
- I want to port `bevy_egui` to Bevy main and only reuse re-exports from Bevy
## Solution
- Add exports for `BufferBinding` and `BufferDescriptor`
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
Fixes #3422
## Solution
Adds the existing `Visibility` component to UI bundles and checks for it in the extract phase of the render app.
The `ComputedVisibility` component was not added. I don't think the UI camera needs frustum culling, but having `RenderLayers` work may be desirable. However I think we would need to change `check_visibility()` to differentiate between 2d, 3d and UI entities.
# Objective
- Our vendored crevice is still called "crevice", which we can't use for a release
- Users would need to use our "crevice" directly to be able to use the derive macro
## Solution
- Rename crevice to bevy_crevice, and crevice-derive to bevy-crevice-derive
- Re-export it from bevy_render, and use it from bevy_render everywhere
- Fix derive macro to work either from bevy_render, from bevy_crevice, or from bevy
## Remaining
- It is currently re-exported as `bevy::render::bevy_crevice`, is it the path we want?
- After a brief suggestion to Cart, I changed the version to follow Bevy version instead of crevice, do we want that?
- Crevice README.md needs to be updated
- In the `Cargo.toml`, there are a few things to change. How do we want to change them? How do we keep attributions to the original Crevice?
```
authors = ["Lucien Greathouse <me@lpghatguy.com>"]
documentation = "https://docs.rs/crevice"
homepage = "https://github.com/LPGhatguy/crevice"
repository = "https://github.com/LPGhatguy/crevice"
```
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
Instead of panicking when the `indices` field of a mesh is `None`, actually manage it.
This is just a question of keeping track of the vertex buffer size.
## Notes
* Relying on this change to improve performance on [bevy_debug_lines using the new renderer](https://github.com/Toqozz/bevy_debug_lines/pull/10)
* I'm still new to rendering, my only expertise with wgpu is the learn-wgpu tutorial, likely I'm overlooking something.
### Problem
- shader processing errors are not displayed
- during hot reloading when encountering a shader with errors, the whole app crashes
### Solution
- log `error!`s for shader processing errors
- when `cfg(debug_assertions)` is enabled (i.e. you're running in `debug` mode), parse shaders before passing them to wgpu. This lets us handle errors early.
# Objective
- 3d examples fail to run in webgl2 because of unsupported texture formats or textures that are too large
## Solution
- switch to supported formats if a feature is enabled. I chose a feature instead of a build target so as not to conflict with potential webgpu support
Heavily inspired by 6813b2edc5, and needs #3290 to work.
I named the feature `webgl2`, but it's only needed if one wants to use PBR in webgl2. Examples using only 2D already work.
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
- I want to be able to use `#ifdef` and other processor directives in an imported shader
## Solution
- Process imported shader strings
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
Add missing methods to `TrackedRenderPass`
- `set_push_constants`
- `set_viewport`
- `insert_debug_marker`
- `push_debug_group`
- `pop_debug_group`
- `set_blend_constant`
https://docs.rs/wgpu/0.12.0/wgpu/struct.RenderPass.html
I need `set_push_constants` but started adding the others as I noticed they were also missing. The `draw indirect` family of methods is still missing, as are the `timestamp query` methods.
# Objective
- Only bevy_render should depend directly on wgpu
- This helps to make sure bevy_render re-exports everything needed from wgpu
## Solution
- Remove bevy_pbr, bevy_sprite and bevy_ui dependency on wgpu
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
Fixes #3352 and #3208
## Solution
- Update wgpu to 0.12
- Update naga to 0.8
- Resolve compilation errors
- Remove `[[block]]` from WGSL shaders (it is deprecated and wgpu can no longer parse it)
- Replace `elseif` with `else if` in pbr.wgsl
# Objective
- The multiple windows example was viciously murdered in #3175.
- cart asked me to
## Solution
- Rework the example to work on pipelined-rendering, based on the work from #2898
# Objective
- There are a few dead-link warnings when building Bevy docs
- CI seems to not catch those warnings when it should
## Solution
- Enable doc CI on all Bevy workspace
- Fix warnings
- Also noticed the `GilrsPlugin` was no longer added when its feature was enabled
First commit to check that CI would actually fail with it: https://github.com/bevyengine/bevy/runs/4532652688?check_suite_focus=true
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective And Solution
Add `set_scissor_rect` from wgpu-rs to the `TrackedRenderPass`. wgpu documentation can be found here:
https://docs.rs/wgpu/latest/wgpu/struct.RenderPass.html#method.set_scissor_rect
The reason for adding this is to cull fragments that are outside of the given rect. For my purposes this is extremely useful for UI.
This makes the [New Bevy Renderer](#2535) the default (and only) renderer. The new renderer isn't _quite_ ready for the final release yet, but I want as many people as possible to start testing it so we can identify bugs and address feedback prior to release.
The examples are all ported over and operational with a few exceptions:
* I removed a good portion of the examples in the `shader` folder. We still have some work to do in order to make these examples possible / ergonomic / worthwhile: #3120 and "high level shader material plugins" are the big ones. This is a temporary measure.
* Temporarily removed the multiple_windows example: doing this properly in the new renderer will require the upcoming "render targets" changes. Same goes for the render_to_texture example.
* Removed z_sort_debug: entity visibility sort info is no longer available in app logic. We could do this on the "render app" side, but I don't consider it a priority.
# Objective
- Checks for NaN in computed NDC space coordinates, fixing unexpected NaN in a fallible (`Option<T>`) function.
## Solution
- Adds a NaN check, in addition to the existing NDC bounds checks.
- This is a helper function, and should have no performance impact to the engine itself.
- This will help prevent hard-to-trace NaN propagation in user code, by returning `None` instead of `Some(NaN)`.
Depends on https://github.com/bevyengine/bevy/pull/3269 for CI error fix.
# Objective
Fixes recent pipeline errors:
```
error: use of deprecated associated function `std::array::IntoIter::<T, N>::new`: use `IntoIterator::into_iter` instead
--> crates/bevy_render/src/mesh/mesh.rs:467:54
|
467 | .flat_map(|normal| std::array::IntoIter::new([normal, normal, normal]))
| ^^^
|
= note: `-D deprecated` implied by `-D warnings`
Compiling bevy_render2 v0.5.0 (/home/runner/work/bevy/bevy/pipelined/bevy_render2)
error: use of deprecated associated function `std::array::IntoIter::<T, N>::new`: use `IntoIterator::into_iter` instead
--> pipelined/bevy_render2/src/mesh/mesh/mod.rs:287:54
|
287 | .flat_map(|normal| std::array::IntoIter::new([normal, normal, normal]))
| ^^^
|
= note: `-D deprecated` implied by `-D warnings`
error: could not compile `bevy_render` due to previous error
```
## Solution
- Replaced `IntoIter::new` with `IntoIterator::into_iter`
## Suggestions
For me it looks like two equivalent `Mesh` structs with the same methods. Should we refactor it? Or will they be different in the near future?
Co-authored-by: CrazyRoka <rokarostuk@gmail.com>
# Objective
- New clippy lints with rust 1.57 are failing
## Solution
- Fixed clippy lints following suggestions
- I ignored clippy in the old renderer because there were many and it will be removed soon
# Objective
- Update vendored crevice to the latest changes from crevice 0.8.0
- Using https://github.com/ElectronicRU/crevice/tree/arrays which has the changes to make arrays work
## Solution
- Also updated glam and hexasphere to only have one version of glam
- From the original PR, using crevice to write GLSL code containing arrays would probably not work but it's not something used by Bevy
# Objective
During work on #3009 I found that not all jobs use actions-rs, and therefore a previous version of Rust is used for them. So while compilation and other checks can pass, the markup check and the Android build may fail with compilation errors.
## Solution
This PR adds `actions-rs` to any job running cargo, and updates the edition to 2021.
# Objective
The current TODO comment is out of date
## Solution
I switched up the comment
Co-authored-by: William Batista <45850508+billyb2@users.noreply.github.com>
# Objective
The update to wgpu 0.11 broke CI for android. This was due to a confusion between `bevy::render::ShaderStage` and `wgpu::ShaderStage`.
## Solution
Revert the incorrect change
Upgrades both the old and new renderer to wgpu 0.11 (and naga 0.7). This builds on @zicklag's work here #2556.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This implements the most minimal variant of #1843 - a derive for marker trait. This is a prerequisite to more complicated features like statically defined storage type or opt-out component reflection.
In order to make a component struct's purpose explicit and avoid misuse, it must be annotated with `#[derive(Component)]` (manual impl is discouraged for compatibility). Right now this is just a marker trait, but in the future it might be expanded. Making this change early allows us to make further changes later without breaking backward compatibility for derive macro users.
This already prevents a lot of issues, like using bundles in `insert` calls. Primitive types are no longer valid components as well. This can be easily worked around by adding newtype wrappers and deriving `Component` for them.
One funny example of prevented bad code (from our own tests) is when a newtype struct or enum variant is used. Previously, it was possible to write `insert(Newtype)` instead of `insert(Newtype(value))`. That code compiled, because function pointers (in this case the newtype struct constructor) implement `Send + Sync + 'static`, so we allowed them to be used as components. This is no longer the case and such invalid code will trigger a compile error.
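A quick illustration of the new requirement (sketch):
```rust
use bevy::prelude::*;

// Components must now be explicitly marked; bare primitives and function
// pointers no longer qualify.
#[derive(Component)]
struct Health(f32);

fn spawn(mut commands: Commands) {
    // `commands.spawn().insert(Health)` no longer compiles;
    // the constructor must actually be called.
    commands.spawn().insert(Health(100.0));
}
```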
Co-authored-by: = <=>
Co-authored-by: TheRawMeatball <therawmeatball@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Updates the requirements on [hexasphere](https://github.com/OptimisticPeach/hexasphere) to permit the latest version.
See the full diff in the [compare view](https://github.com/OptimisticPeach/hexasphere/commits).
This updates the `pipelined-rendering` branch to use the latest `bevy_ecs` from `main`. This accomplishes a couple of goals:
1. prepares for upcoming `custom-shaders` branch changes, which were what drove many of the recent bevy_ecs changes on `main`
2. prepares for the soon-to-happen merge of `pipelined-rendering` into `main`. By including bevy_ecs changes now, we make that merge simpler / easier to review.
I split this up into 3 commits:
1. **add upstream bevy_ecs**: please don't bother reviewing this content. it has already received thorough review on `main` and is a literal copy/paste of the relevant folders (the old folders were deleted so the directories are literally exactly the same as `main`).
2. **support manual buffer application in stages**: this is used to enable the Extract step. we've already reviewed this once on the `pipelined-rendering` branch, but it's worth looking at one more time in the new context of (1).
3. **support manual archetype updates in QueryState**: same situation as (2).
# Objective
- Make it easy to use `HexColorError` with `thiserror`, i.e. converting it into other error types.
Makes this possible:
```rust
#[derive(Debug, thiserror::Error)]
pub enum LdtkError {
    #[error("An error occurred while deserializing")]
    Json(#[from] serde_json::Error),
    #[error("An error occurred while parsing a color")]
    HexColor(#[from] bevy::render::color::HexColorError),
}
```
## Solution
- Derive thiserror::Error the same way we do elsewhere (see query.rs for instance)
# Objective
Enable using exact World lifetimes during read-only access. This is motivated by the new renderer's need to allow read-only world-only queries to outlive the query itself (but still be constrained by the world lifetime).
For example:
115b170d1f/pipelined/bevy_pbr2/src/render/mod.rs (L774)
## Solution
Split out SystemParam state and world lifetimes and pipe those lifetimes up to read-only Query ops (and add into_inner for Res). According to every safety test I've run so far (except one), this is safe (see the temporary safety test commit). Note that changing the mutable variants to the new lifetimes would allow aliased mutable pointers (try doing that to see how it affects the temporary safety tests).
The new state lifetime on SystemParam does make `#[derive(SystemParam)]` more cumbersome (the current impl requires PhantomData if you don't use both lifetimes). We can make this better by detecting whether or not a lifetime is used in the derive and adjusting accordingly, but that should probably be done in its own pr.
## Why is this a draft?
The new lifetimes break QuerySet safety in one very specific case (see the query_set system in system_safety_test). We need to solve this before we can use the lifetimes given.
This is due to the fact that QuerySet is just a wrapper over Query, which now relies on world lifetimes instead of `&self` lifetimes to prevent aliasing (but in systems, each Query has its own implied lifetime, not a centralized world lifetime). I believe the fix is to rewrite QuerySet to have its own World lifetime (and own the internal reference). This will complicate the impl a bit, but I think it is doable. I'm curious if anyone else has better ideas.
Personally, I think these new lifetimes need to happen. We've gotta have a way to directly tie read-only World queries to the World lifetime. The new renderer is the first place this has come up, but I doubt it will be the last. Worst case scenario we can come up with a second `WorldLifetimeQuery<Q, F = ()>` parameter to enable these read-only scenarios, but I'd rather not add another type to the type zoo.
# Objective
- Provides more useful error messages when using unsupported shader features.
## Solution
Fixes #869
- Provided an error message as follows (adding name, set, and binding):
```
Unsupported shader bind type CombinedImageSampler (name noiseVol0, set 0, binding 9)
```
# Objective
- Remove all the `.system()` calls possible.
- Check for remaining missing cases.
## Solution
- Remove all `.system()`, fix compile errors
- 32 calls to `.system()` remain, mostly internal; the few others should be removed after #2446
This is extracted out of eb8f973646476b4a4926ba644a77e2b3a5772159 and includes some additional changes to remove all references to AppBuilder and fix examples that still used App::build() instead of App::new(). In addition I didn't extract the sub app feature as it isn't ready yet.
You can use `git diff --diff-filter=M eb8f973646476b4a4926ba644a77e2b3a5772159` to find all differences in this PR. The `--diff-filter=M` option filters away all files that were added in the original commit but not in this one.
Co-Authored-By: Carter Anderson <mcanders1@gmail.com>
This relicenses Bevy under the dual MIT or Apache-2.0 license. For rationale, see #2373.
* Changes the LICENSE file to describe the dual license. Moved the MIT license to docs/LICENSE-MIT. Added the Apache-2.0 license to docs/LICENSE-APACHE. I opted for this approach over dumping both license files at the root (the more common approach) for a number of reasons:
* Github links to the "first" license file (LICENSE-APACHE) in its license links (you can see this in the wgpu and rust-analyzer repos). People clicking these links might erroneously think that the apache license is the only option. Rust and Amethyst both use COPYRIGHT or COPYING files to solve this problem, but this creates more file noise (if you do everything at the root) and the naming feels way less intuitive.
* People have a reflex to look for a LICENSE file. By providing a single license file at the root, we make it easy for them to understand our licensing approach.
* I like keeping the root clean and noise free
* There is precedent for putting the apache and mit license text in sub folders (amethyst)
* Removed the `Copyright (c) 2020 Carter Anderson` copyright notice from the MIT license. I don't care about this attribution, it might make license compliance more difficult in some cases, and it didn't properly attribute other contributors. We shouldn't replace it with something like "Copyright (c) 2021 Bevy Contributors" because "Bevy Contributors" is not a legal entity. Instead, we just won't include the copyright line (which has precedent ... Rust also uses this approach).
* Updates crates to use the new "MIT OR Apache-2.0" license value
* Removes the old legion-transform license file from bevy_transform. bevy_transform has been its own, fully custom implementation for a long time and that license no longer applies.
* Added a License section to the main readme
* Updated our Bevy Plugin licensing guidelines.
As a follow-up we should update the website to properly describe the new license.
Closes #2373
This was tested using cargo generate-lockfile -Zminimal-versions.
The following indirect dependencies also have minimal version
dependencies. For at least num, rustc-serialize and rand this is
necessary to compile on rustc versions that are not older than 1.0.
* num = "0.1.27"
* rustc-serialize = "0.3.20"
* termcolor = "1.0.4"
* libudev-sys = "0.1.1"
* rand = "0.3.14"
* ab_glyph = "0.2.7
Based on https://github.com/bevyengine/bevy/pull/2455
# Objective
Reduce compilation time
## Solution
Remove unused dependencies. While this PR doesn't remove any crates from `Cargo.lock`, it may unlock more build parallelism.
# Objective
Fixes how the layer bit is unset in the RenderLayers bit mask when calling the `without` method.
## Solution
Unsets the layer bit using `&=` and the inverse of the layer bit mask.
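As a rough sketch of the fixed logic (simplified mask type and a hypothetical free function, not the actual `RenderLayers` implementation):
```rust
// Clear only the targeted bit by AND-ing with the inverted layer bit mask.
type LayerMask = u32;

fn without(mask: LayerMask, layer: u8) -> LayerMask {
    mask & !(1 << layer)
}

fn main() {
    let mask: LayerMask = 0b0110; // layers 1 and 2
    assert_eq!(without(mask, 1), 0b0100); // layer 1 removed, layer 2 kept
}
```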
# Objective
- CI jobs are starting to fail due to `clippy::bool-assert-comparison` and `clippy::single_component_path_imports` being triggered.
## Solution
- Fix all uses where `assert_eq!(<condition>, <bool>)` could be replaced by `assert!`
- Move the `#[allow()]` for `single_component_path_imports` to `#![allow()]` at the start of the files.
Fixes #2169
Instead of having custom methods with reduced visibility, implement `From<image::DynamicImage> for Texture` and `TryFrom<Texture> for image::DynamicImage`
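For illustration, a hedged sketch of the conversion pattern with stand-in types (the real impls convert between `image::DynamicImage` and Bevy's `Texture`):
```rust
use std::convert::TryFrom;

struct DynamicImage {
    bytes: Vec<u8>,
}

struct Texture {
    data: Vec<u8>,
}

// Infallible direction: every image can become a texture.
impl From<DynamicImage> for Texture {
    fn from(image: DynamicImage) -> Self {
        Texture { data: image.bytes }
    }
}

// Fallible direction: not every texture format maps back to an image, hence TryFrom.
impl TryFrom<Texture> for DynamicImage {
    type Error = &'static str;

    fn try_from(texture: Texture) -> Result<Self, Self::Error> {
        if texture.data.is_empty() {
            Err("unsupported texture format")
        } else {
            Ok(DynamicImage { bytes: texture.data })
        }
    }
}

fn main() {
    let texture = Texture::from(DynamicImage { bytes: vec![0, 1, 2] });
    assert!(DynamicImage::try_from(texture).is_ok());
}
```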
Since `visible_entities_system` already checks `Visiblie::is_visible` for each entity and requires it to be `true`, there's no reason to verify visibility in `PassNode::prepare` which consumes entities produced by the system.
When implementing `AssetLoader` you need to specify which file extensions are supported by that loader.
Currently, Bevy always says it supports extensions that actually require activating a feature beforehand.
This PR adds cfg attributes, so Bevy only advertises the extensions whose features are activated.
This prevents Bevy from panicking and instead reports a warning such as:
```
Jun 02 23:05:57.139 WARN bevy_asset::asset_server: no `AssetLoader` found for the following extension: ogg
```
This also fixes the bug that the `png` feature had to be activated even if you wanted to load a different image format.
Fixes #640
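A hedged sketch of the approach (not the actual `bevy_asset` code; the `jpeg` feature name here is only an assumption): the advertised extension list is built from the formats whose cargo features are actually enabled.
```rust
fn supported_extensions() -> Vec<&'static str> {
    // `mut` is unused when no image features are enabled.
    #[allow(unused_mut)]
    let mut extensions = Vec::new();
    // Each format's extension is only registered when its feature is compiled in.
    #[cfg(feature = "png")]
    extensions.push("png");
    #[cfg(feature = "jpeg")]
    extensions.push("jpg");
    extensions
}

fn main() {
    println!("loader supports: {:?}", supported_extensions());
}
```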
`ResMut`, `Mut` and `ReflectMut` all share very similar code for change detection.
This PR is a first pass at refactoring these implementation and removing a lot of the duplicated code.
Note, this introduces a new trait `ChangeDetectable`.
Please feel free to comment away and let me know what you think!
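For context, a minimal sketch of the shared pattern being deduplicated, using illustrative names rather than the actual `bevy_ecs` types: a smart-pointer wrapper that flips a "changed" flag whenever it is dereferenced mutably.
```rust
use std::ops::{Deref, DerefMut};

struct TrackedMut<'a, T> {
    value: &'a mut T,
    changed: &'a mut bool,
}

impl<'a, T> Deref for TrackedMut<'a, T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.value
    }
}

impl<'a, T> DerefMut for TrackedMut<'a, T> {
    fn deref_mut(&mut self) -> &mut T {
        // Change detection hooks in here: any mutable access marks the value.
        *self.changed = true;
        self.value
    }
}

fn main() {
    let mut value = 1;
    let mut changed = false;
    {
        let mut tracked = TrackedMut { value: &mut value, changed: &mut changed };
        *tracked += 1; // DerefMut access marks the value as changed
    }
    assert!(changed);
    assert_eq!(value, 2);
}
```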
This gets rid of multiple unsafe blocks that we had to maintain ourselves, and instead depends on library that's commonly used and supported by the ecosystem. We also get support for glam types for free.
There is still some things to clear up with the `Bytes` trait, but that is a bit more substantial change and can be done separately. Also there are already separate efforts to use `crevice` crate, so I've just added that as a TODO.
There's what might be considered a proper bug in `PipelineCompiler::compile_pipeline()`, where it overwrites the `step_mode` for the passed in `VertexBufferLayout` with `InputStepMode::Vertex`. Due to this some ugly workarounds are needed to do any kind of instancing.
In the somewhat longer term, `PipelineCompiler::compile_pipeline()` should probably also handle a `Vec<VertexBufferLayout>`, but that would be a (slightly) larger PR, rather than a bugfix. And I'd love to have this fix in sooner than we can deal with a bigger PR.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Required by #1429,
- Adds the `Ushort4` vertex attribute for joint indices
- `Mesh::ATTRIBUTE_JOINT_WEIGHT` and `Mesh::ATTRIBUTE_JOINT_INDEX` to import vertex attributes related to skinning from GLTF
- impl `Default` for `Mesh`: an empty triangle mesh is created (needed by reflect)
- impl `Reflect` for `Mesh`: all attributes are ignored (needed by the animation system)
Changes to get Bevy to compile with wgpu master.
With this, on a Mac:
* 2d examples look fine
* ~~3d examples crash with an error specific to metal about a compilation error~~
* 3d examples work fine after enabling feature `wgpu/cross`
Feature `wgpu/cross` seems to be needed only on some platforms, not sure how to know which. It was introduced in https://github.com/gfx-rs/wgpu-rs/pull/826
Fixes #2037 (and then some)
Problem:
- `TypeUuid`, `RenderResource`, and `Bytes` derive macros did not properly handle generic structs.
Solution:
- Rework the derive macro implementations to handle the generics.
If a mesh without any vertex attributes is rendered (for example, one that only has indices), bevy will crash since the mesh still creates a vertex buffer even though it's empty. Later code assumes that there is vertex data, causing an index-out-of-bounds panic. This PR fixes the issue by adding a check that there is any vertex data before creating a vertex buffer.
I ran into this issue while rendering a tilemap without any vertex attributes (only indices).
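A simplified sketch of the fix (not the actual `bevy_render` code): only create a vertex buffer when the mesh actually has vertex attribute data.
```rust
struct Mesh {
    vertex_data: Vec<u8>,
}

// Stand-in for buffer creation: returns None instead of producing an empty buffer.
fn maybe_create_vertex_buffer(mesh: &Mesh) -> Option<Vec<u8>> {
    if mesh.vertex_data.is_empty() {
        // Index-only meshes get no vertex buffer, which previously led to an
        // index-out-of-bounds panic downstream.
        None
    } else {
        Some(mesh.vertex_data.clone())
    }
}

fn main() {
    let index_only = Mesh { vertex_data: Vec::new() };
    assert!(maybe_create_vertex_buffer(&index_only).is_none());
}
```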
Stack trace:
```
thread 'main' panicked at 'index out of bounds: the len is 0 but the index is 0', C:\Dev\Games\bevy\crates\bevy_render\src\render_graph\nodes\pass_node.rs:346:9
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/bb491ed23937aef876622e4beb68ae95938b3bf9\/library\std\src\panicking.rs:493
1: core::panicking::panic_fmt
at /rustc/bb491ed23937aef876622e4beb68ae95938b3bf9\/library\core\src\panicking.rs:92
2: core::panicking::panic_bounds_check
at /rustc/bb491ed23937aef876622e4beb68ae95938b3bf9\/library\core\src\panicking.rs:69
3: core::slice::index::{{impl}}::index<core::option::Option<tuple<bevy_render::renderer::render_resource::buffer::BufferId, u64>>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\slice\index.rs:184
4: core::slice::index::{{impl}}::index<core::option::Option<tuple<bevy_render::renderer::render_resource::buffer::BufferId, u64>>,usize>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\slice\index.rs:15
5: alloc::vec::{{impl}}::index<core::option::Option<tuple<bevy_render::renderer::render_resource::buffer::BufferId, u64>>,usize,alloc::alloc::Global>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\vec\mod.rs:2386
6: bevy_render::render_graph::nodes::pass_node::DrawState::is_vertex_buffer_set
at C:\Dev\Games\bevy\crates\bevy_render\src\render_graph\nodes\pass_node.rs:346
7: bevy_render::render_graph::nodes::pass_node::{{impl}}::update::{{closure}}<bevy_render::render_graph::base::MainPass*>
at C:\Dev\Games\bevy\crates\bevy_render\src\render_graph\nodes\pass_node.rs:285
8: bevy_wgpu::renderer::wgpu_render_context::{{impl}}::begin_pass
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\renderer\wgpu_render_context.rs:196
9: bevy_render::render_graph::nodes::pass_node::{{impl}}::update<bevy_render::render_graph::base::MainPass*>
at C:\Dev\Games\bevy\crates\bevy_render\src\render_graph\nodes\pass_node.rs:244
10: bevy_wgpu::renderer::wgpu_render_graph_executor::WgpuRenderGraphExecutor::execute
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\renderer\wgpu_render_graph_executor.rs:75
11: bevy_wgpu::wgpu_renderer::{{impl}}::run_graph::{{closure}}
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\wgpu_renderer.rs:115
12: bevy_ecs::world::World::resource_scope<bevy_render::render_graph::graph::RenderGraph,tuple<>,closure-0>
at C:\Dev\Games\bevy\crates\bevy_ecs\src\world\mod.rs:715
13: bevy_wgpu::wgpu_renderer::WgpuRenderer::run_graph
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\wgpu_renderer.rs:104
14: bevy_wgpu::wgpu_renderer::WgpuRenderer::update
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\wgpu_renderer.rs:121
15: bevy_wgpu::get_wgpu_render_system::{{closure}}
at C:\Dev\Games\bevy\crates\bevy_wgpu\src\lib.rs:112
16: alloc::boxed::{{impl}}::call_mut<tuple<mut bevy_ecs::world::World*>,FnMut<tuple<mut bevy_ecs::world::World*>>,alloc::alloc::Global>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1553
17: bevy_ecs::system::exclusive_system::{{impl}}::run
at C:\Dev\Games\bevy\crates\bevy_ecs\src\system\exclusive_system.rs:41
18: bevy_ecs::schedule::stage::{{impl}}::run
at C:\Dev\Games\bevy\crates\bevy_ecs\src\schedule\stage.rs:812
19: bevy_ecs::schedule::Schedule::run_once
at C:\Dev\Games\bevy\crates\bevy_ecs\src\schedule\mod.rs:201
20: bevy_ecs::schedule::{{impl}}::run
at C:\Dev\Games\bevy\crates\bevy_ecs\src\schedule\mod.rs:219
21: bevy_app::app::App::update
at C:\Dev\Games\bevy\crates\bevy_app\src\app.rs:58
22: bevy_winit::winit_runner_with::{{closure}}
at C:\Dev\Games\bevy\crates\bevy_winit\src\lib.rs:485
23: winit::platform_impl::platform::event_loop::{{impl}}::run_return::{{closure}}<tuple<>,closure-1>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:203
24: alloc::boxed::{{impl}}::call_mut<tuple<winit::event::Event<tuple<>>, mut winit::event_loop::ControlFlow*>,FnMut<tuple<winit::event::Event<tuple<>>, mut winit::event_loop::ControlFlow*>>,alloc::alloc::Global>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1553
25: winit::platform_impl::platform::event_loop::runner::{{impl}}::call_event_handler::{{closure}}<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:245
26: std::panic::{{impl}}::call_once<tuple<>,closure-0>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panic.rs:344
27: std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure-0>,tuple<>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panicking.rs:379
28: hashbrown::set::HashSet<mut winapi::shared::windef::HWND__*, std::collections::hash::map::RandomState, alloc::alloc::Global>::iter<mut winapi::shared::windef::HWND__*,std::collections::hash::map::RandomState,alloc::alloc::Global>
29: std::panicking::try<tuple<>,std::panic::AssertUnwindSafe<closure-0>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panicking.rs:343
30: std::panic::catch_unwind<std::panic::AssertUnwindSafe<closure-0>,tuple<>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panic.rs:431
31: winit::platform_impl::platform::event_loop::runner::EventLoopRunner<tuple<>>::catch_unwind<tuple<>,tuple<>,closure-0>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:152
32: winit::platform_impl::platform::event_loop::runner::EventLoopRunner<tuple<>>::call_event_handler<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:239
33: winit::platform_impl::platform::event_loop::runner::EventLoopRunner<tuple<>>::move_state_to<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:341
34: winit::platform_impl::platform::event_loop::runner::EventLoopRunner<tuple<>>::main_events_cleared<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:227
35: winit::platform_impl::platform::event_loop::flush_paint_messages<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:676
36: winit::platform_impl::platform::event_loop::thread_event_target_callback::{{closure}}<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:1967
37: std::panic::{{impl}}::call_once<isize,closure-0>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panic.rs:344
38: std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure-0>,isize>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panicking.rs:379
39: hashbrown::set::HashSet<mut winapi::shared::windef::HWND__*, std::collections::hash::map::RandomState, alloc::alloc::Global>::iter<mut winapi::shared::windef::HWND__*,std::collections::hash::map::RandomState,alloc::alloc::Global>
40: std::panicking::try<isize,std::panic::AssertUnwindSafe<closure-0>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panicking.rs:343
41: std::panic::catch_unwind<std::panic::AssertUnwindSafe<closure-0>,isize>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\panic.rs:431
42: winit::platform_impl::platform::event_loop::runner::EventLoopRunner<tuple<>>::catch_unwind<tuple<>,isize,closure-0>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop\runner.rs:152
43: winit::platform_impl::platform::event_loop::thread_event_target_callback<tuple<>>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:2151
44: DefSubclassProc
45: DefSubclassProc
46: CallWindowProcW
47: DispatchMessageW
48: SendMessageTimeoutW
49: KiUserCallbackDispatcher
50: NtUserDispatchMessage
51: DispatchMessageW
52: winit::platform_impl::platform::event_loop::EventLoop<tuple<>>::run_return<tuple<>,closure-1>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:218
53: winit::platform_impl::platform::event_loop::EventLoop<tuple<>>::run<tuple<>,closure-1>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\platform_impl\windows\event_loop.rs:188
54: winit::event_loop::EventLoop<tuple<>>::run<tuple<>,closure-1>
at C:\Users\tehpe\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.24.0\src\event_loop.rs:154
55: bevy_winit::run<closure-1>
at C:\Dev\Games\bevy\crates\bevy_winit\src\lib.rs:171
56: bevy_winit::winit_runner_with
at C:\Dev\Games\bevy\crates\bevy_winit\src\lib.rs:493
57: bevy_winit::winit_runner
at C:\Dev\Games\bevy\crates\bevy_winit\src\lib.rs:211
58: core::ops::function::Fn::call<fn(bevy_app::app::App),tuple<bevy_app::app::App>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:70
59: alloc::boxed::{{impl}}::call<tuple<bevy_app::app::App>,Fn<tuple<bevy_app::app::App>>,alloc::alloc::Global>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1560
60: bevy_app::app::App::run
at C:\Dev\Games\bevy\crates\bevy_app\src\app.rs:68
61: bevy_app::app_builder::AppBuilder::run
at C:\Dev\Games\bevy\crates\bevy_app\src\app_builder.rs:54
62: game_main::main
at .\crates\game_main\src\main.rs:23
63: core::ops::function::FnOnce::call_once<fn(),tuple<>>
at C:\Users\tehpe\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:227
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Apr 27 21:51:01.026 ERROR gpu_descriptor::allocator: `DescriptorAllocator` is dropped while some descriptor sets were not deallocated
error: process didn't exit successfully: `target/cargo\debug\game_main.exe` (exit code: 0xc000041d)
```
There are cases where we want an enum variant name. Right now the only way to do that with rust's std is to derive Debug, but this will also print out the variant's fields. This creates the unfortunate situation where we need to manually write out each variant's string name (ex: in #1963), which is both boilerplate-ey and error-prone. Crates such as `strum` exist for this reason, but it includes a lot of code and complexity that we don't need.
This adds a dead-simple `EnumVariantMeta` derive that exposes `enum_variant_index` and `enum_variant_name` functions. This allows us to make cases like #1963 much cleaner (see the second commit). We might also be able to reuse this logic for `bevy_reflect` enum derives.
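To show the intended usage, here is a hand-written equivalent of what the derive presumably generates, for a small hypothetical enum:
```rust
#[allow(dead_code)]
enum Interpolation {
    Linear,
    Step,
    CubicSpline,
}

impl Interpolation {
    // The derive exposes the variant's position in declaration order...
    fn enum_variant_index(&self) -> usize {
        match self {
            Interpolation::Linear => 0,
            Interpolation::Step => 1,
            Interpolation::CubicSpline => 2,
        }
    }

    // ...and its name as a string, without dragging the fields along like Debug does.
    fn enum_variant_name(&self) -> &'static str {
        match self {
            Interpolation::Linear => "Linear",
            Interpolation::Step => "Step",
            Interpolation::CubicSpline => "CubicSpline",
        }
    }
}

fn main() {
    let value = Interpolation::Step;
    assert_eq!(value.enum_variant_index(), 1);
    assert_eq!(value.enum_variant_name(), "Step");
}
```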
In bevy_webgl2, the `RenderResourceContext` is created after startup as it needs to first wait for an event from js side:
f31e5d49de/src/lib.rs (L117)
Remove the `panic` introduced in #1965 and log a `warn` instead.
This implementation allows you to convert `std::vec::Vec<T>` to `VertexAttributeValues::T` and back.
# Examples
```rust
use std::convert::TryInto;
use bevy_render::mesh::VertexAttributeValues;
// creating vector of values
let before = vec![[0_u32; 4]; 10];
let values = VertexAttributeValues::from(before.clone());
let after: Vec<[u32; 4]> = values.try_into().unwrap();
assert_eq!(before, after);
```
Co-authored-by: aloucks <aloucks@cofront.net>
Co-authored-by: simens_green <34134129+simensgreen@users.noreply.github.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Allows render resources to move data to the heap by boxing them. I did this as a workaround to #1892, but it seems like it'd be useful regardless. If not, feel free to close this PR.
Implements `Byteable` and `RenderResource` for any array containing `Byteable` elements. This allows `RenderResources` to be implemented on structs with arbitrarily-sized arrays, among other things:
```rust
#[derive(RenderResources, TypeUuid)]
#[uuid = "2733ff34-8f95-459f-bf04-3274e686ac5f"]
struct Foo {
    buffer: [i32; 256],
}
```
Fixes #1809. It also makes it possible to use `derive` for `SystemParam` inside ECS and avoid a manual implementation. An alternative solution to the macro changes is to use `use crate as bevy_ecs;` in `event.rs`.
The `VertexBufferLayout` returned by `crates\bevy_render\src\mesh\mesh.rs:308` was unstable, because `HashMap.iter()` has a random order. This caused the pipeline_compiler to wrongly consider a specialization to be different (`crates\bevy_render\src\pipeline\pipeline_compiler.rs:123`), causing each mesh changed event to potentially result in a different `PipelineSpecialization`. This in turn caused `Draw` to emit a `set_pipeline` much more often than needed.
This fix shaves off a `BindPipeline` and two `BindDescriptorSets` (for the Camera and for global render resources) for every mesh after the first that can now use the same specialization, where it didn't before (because the order was random).
`StableHashMap` was not a good replacement, because it isn't `Clone`, so instead I replaced it with a `BTreeMap` which is OK in this instance, because there shouldn't be many insertions on `Mesh.attributes` after the mesh is created.
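A small sketch of the underlying problem and fix: iterating a `HashMap` visits attributes in a random order, while a `BTreeMap` yields the same (sorted) order every run, so the derived layout stays stable. Attribute names here are only illustrative.
```rust
use std::collections::BTreeMap;

fn main() {
    let mut attributes = BTreeMap::new();
    attributes.insert("Vertex_Normal", 12usize);
    attributes.insert("Vertex_Position", 12);
    attributes.insert("Vertex_Uv", 8);

    // Deterministic order -> deterministic layout -> no spurious respecialization.
    let layout: Vec<&str> = attributes.keys().copied().collect();
    assert_eq!(layout, vec!["Vertex_Normal", "Vertex_Position", "Vertex_Uv"]);
}
```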
- prints glsl compile error message in multiple lines instead of `thread 'main' panicked at 'called Result::unwrap() on an Err value: Compilation("glslang_shader_parse:\nInfo log:\nERROR: 0:335: \'assign\' : l-value required \"anon@7\" (can\'t modify a uniform)\nERROR: 0:335: \'\' : compilation terminated \nERROR: 2 compilation errors. No code generated.\n\n\nDebug log:\n\n")', crates/bevy_render/src/pipeline/pipeline_compiler.rs:161:22`
- gives glTF error messages more context
New error:
```
thread 'Compute Task Pool (5)' panicked at 'Shader compilation error:
glslang_shader_parse:
Info log:
ERROR: 0:12: 'assign' : l-value required "anon@1" (can't modify a uniform)
ERROR: 0:12: '' : compilation terminated
ERROR: 2 compilation errors. No code generated.
', crates/bevy_render/src/pipeline/pipeline_compiler.rs:364:5
```
These changes are a bit unrelated. I can open separate PRs if someone wants that.
After #1697 I looked at all the other iterators in Bevy and added `size_hint` overrides where they were missing.
Also implemented `ExactSizeIterator` where applicable.
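As an illustration of the pattern (a toy iterator, not one of Bevy's): report an exact size via `size_hint` and mark the iterator as `ExactSizeIterator` when the remaining length is known up front.
```rust
struct CountDown {
    remaining: usize,
}

impl Iterator for CountDown {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(self.remaining)
        }
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        // Lower and upper bound are both known exactly.
        (self.remaining, Some(self.remaining))
    }
}

impl ExactSizeIterator for CountDown {}

fn main() {
    let iter = CountDown { remaining: 3 };
    assert_eq!(iter.len(), 3); // provided by ExactSizeIterator
    assert_eq!(iter.collect::<Vec<_>>(), vec![2, 1, 0]);
}
```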
In shaders, `vec3` should be avoided for `std140` layout, as they take the size of a `vec4` and won't support manual padding by adding an additional `float`.
This change is needed for 3D to work in WebGL2. With it, I get PBR to render
<img width="1407" alt="Screenshot 2021-04-02 at 02 57 14" src="https://user-images.githubusercontent.com/8672791/113368551-5a3c2780-935f-11eb-8c8d-e9ba65b5ee98.png">
Without it, nothing renders... @cart Could this be considered for 0.5 release?
Also, I learned shaders 2 days ago, so don't hesitate to correct any issue or misunderstanding I may have
bevy_webgl2 PR in progress for Bevy 0.5 is here if you want to test: https://github.com/rparrett/bevy_webgl2/pull/1
I think [collection, thing_removed_from_collection] is a more natural order than [thing_removed_from_collection, collection]. Just a small tweak that I think we should include in 0.5.
This PR adds normal maps on top of PBR #1554. Once that PR lands, the changes should look simpler.
Edit: Turned out to be so little extra work, I added metallic/roughness texture too. And occlusion and emissive.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This PR adds two systems to the sprite module that culls Sprites and AtlasSprites that are not within the camera's view.
This is achieved by removing / adding a new `Viewable` Component dynamically.
Some of the render queries now use a `With<Viewable>` filter to only process the sprites that are actually on screen, which improves performance drastically for scenes with a large number of sprites off-screen.
https://streamable.com/vvzh2u
This scene shows a map of 320x320 tiles, with a grid size of 64p.
That is exactly 102,400 sprites in the entire scene.
Without this PR, this scene runs with 1 to 4 FPS.
With this PR..
.. at 720p, there are around 600 visible sprites and runs at ~215 FPS
.. at 1440p there are around 2000 visible sprites and runs at ~135 FPS
The Systems this PR adds take around 1.2ms (with 100K+ sprites in the scene)
Note:
This is only implemented for Sprites and AtlasTextureSprites.
There is no culling for 3D in this PR.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
This is a rebase of StarArawn's PBR work from #261 with IngmarBitter's work from #1160 cherry-picked on top.
I had to make a few minor changes to get some intermediate commits to compile, and the end result is not yet 100% what I expected, so there's a bit more work to do.
Co-authored-by: John Mitchell <toasterthegamer@gmail.com>
Co-authored-by: Ingmar Bitter <ingmar.bitter@gmail.com>
Alternative to #1203 and #1611
Camera bindings have historically been "hacked in". They were _required_ in all shaders and only supported a single Mat4. PBR (#1554) requires the CameraView matrix, but adding this using the "hacked" method forced users to either include all possible camera data in a single binding (#1203) or include all possible bindings (#1611).
This approach instead assigns each "active camera" its own RenderResourceBindings, which are populated by CameraNode. The PassNode then retrieves (and initializes) the relevant bind groups for all render pipelines used by visible entities.
* Enables any number of camera bindings, including zero (with any set or binding number ... set 0 should still be used to avoid rebinds).
* Renames Camera binding to CameraViewProj
* Adds CameraView binding
# Problem Definition
The current change tracking (via flags for both components and resources) fails to detect changes when the system that should observe them is scheduled to run earlier in the frame than the system that makes them.
This issue is discussed at length in [#68](https://github.com/bevyengine/bevy/issues/68) and [#54](https://github.com/bevyengine/bevy/issues/54).
This is very much a draft PR, and contributions are welcome and needed.
# Criteria
1. Each change is detected at least once, no matter the ordering.
2. Each change is detected at most once, no matter the ordering.
3. Changes should be detected the same frame that they are made.
4. Competitive ergonomics. Ideally does not require opting-in.
5. Low CPU overhead of computation.
6. Memory efficient. This must not increase over time, except where the number of entities / resources does.
7. Changes should not be lost for systems that don't run.
8. A frame needs to act as a pure function. Given the same set of entities / components it needs to produce the same end state without side-effects.
**Exact** change-tracking proposals satisfy criteria 1 and 2.
**Conservative** change-tracking proposals satisfy criteria 1 but not 2.
**Flaky** change tracking proposals satisfy criteria 2 but not 1.
# Code Base Navigation
There are three types of flags:
- `Added`: A piece of data was added to an entity / `Resources`.
- `Mutated`: A piece of data was able to be modified, because its `DerefMut` was accessed
- `Changed`: The bitwise OR of `Added` and `Mutated`
The special behavior of `ChangedRes`, with respect to the scheduler is being removed in [#1313](https://github.com/bevyengine/bevy/pull/1313) and does not need to be reproduced.
`ChangedRes` and friends can be found in "bevy_ecs/core/resources/resource_query.rs".
The `Flags` trait for Components can be found in "bevy_ecs/core/query.rs".
`ComponentFlags` are stored in "bevy_ecs/core/archetypes.rs", defined on line 446.
# Proposals
**Proposal 5 was selected for implementation.**
## Proposal 0: No Change Detection
The baseline, where computations are performed on everything regardless of whether it changed.
**Type:** Conservative
**Pros:**
- already implemented
- will never miss events
- no overhead
**Cons:**
- tons of repeated work
- doesn't allow users to avoid repeating work (or monitoring for other changes)
## Proposal 1: Earlier-This-Tick Change Detection
The current approach as of Bevy 0.4. Flags are set, and then flushed at the end of each frame.
**Type:** Flaky
**Pros:**
- already implemented
- simple to understand
- low memory overhead (2 bits per component)
- low time overhead (clear every flag once per frame)
**Cons:**
- misses systems based on ordering
- systems that don't run every frame miss changes
- duplicates detection when looping
- can lead to unresolvable circular dependencies
## Proposal 2: Two-Tick Change Detection
Flags persist for two frames, using a double-buffer system identical to that used in events.
A change is observed if it is found in either the current frame's list of changes or the previous frame's.
**Type:** Conservative
**Pros:**
- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)
**Cons:**
- can result in a great deal of duplicated work
- systems that don't run every frame miss changes
- duplicates detection when looping
## Proposal 3: Last-Tick Change Detection
Flags persist for two frames, using a double-buffer system identical to that used in events.
A change is observed if it is found in the previous frame's list of changes.
**Type:** Exact
**Pros:**
- exact
- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)
**Cons:**
- change detection is always delayed, possibly causing painful chained delays
- systems that don't run every frame miss changes
- duplicates detection when looping
## Proposal 4: Flag-Doubling Change Detection
Combine Proposal 2 and Proposal 3. Differentiate between `JustChanged` (current behavior) and `Changed` (Proposal 3).
Pack this data into the flags according to [this implementation proposal](https://github.com/bevyengine/bevy/issues/68#issuecomment-769174804).
**Type:** Flaky + Exact
**Pros:**
- allows users to access both the immediate (flaky) and the exact (delayed) forms of change detection
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)
**Cons:**
- users must specify the type of change detection required
- still quite fragile to system ordering effects when using the flaky `JustChanged` form
- cannot get immediate + exact results
- systems that don't run every frame miss changes
- duplicates detection when looping
## [SELECTED] Proposal 5: Generation-Counter Change Detection
A global counter is increased after each system is run. Each component saves the time of last mutation, and each system saves the time of last execution. Mutation is detected when the component's counter is greater than the system's counter. Discussed [here](https://github.com/bevyengine/bevy/issues/68#issuecomment-769174804). How to handle addition detection is unsolved; the current proposal is to use the highest bit of the counter as in proposal 1.
**Type:** Exact (for mutations), flaky (for additions)
**Pros:**
- low time overhead (set component counter on access, set system counter after execution)
- robust to systems that don't run every frame
- robust to systems that loop
**Cons:**
- moderately complex implementation
- must be modified as systems are inserted dynamically
- medium memory overhead (4 bytes per component + system)
- unsolved addition detection
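A minimal sketch of the counter comparison at the heart of Proposal 5, with hypothetical names (a global tick counter, a per-component last-mutated tick, and a per-system last-run tick):
```rust
struct Tracked<T> {
    value: T,
    last_mutated: u32,
}

impl<T> Tracked<T> {
    fn set(&mut self, value: T, current_tick: u32) {
        self.value = value;
        self.last_mutated = current_tick;
    }

    // A system sees the change iff it happened after the system's last run.
    fn is_changed_since(&self, system_last_run: u32) -> bool {
        self.last_mutated > system_last_run
    }
}

fn main() {
    let mut global_tick = 0;
    let mut component = Tracked { value: 0, last_mutated: 0 };
    let mut system_last_run = 0;

    // The detecting system runs once, sees nothing, and records when it ran.
    assert!(!component.is_changed_since(system_last_run));
    global_tick += 1;
    system_last_run = global_tick;

    // Another system mutates the component at a later tick...
    global_tick += 1;
    component.set(5, global_tick);

    // ...so the detecting system observes the change exactly once on its next run.
    assert!(component.is_changed_since(system_last_run));
    system_last_run = global_tick;
    assert!(!component.is_changed_since(system_last_run));
    assert_eq!(component.value, 5);
}
```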
## Proposal 6: System-Data Change Detection
For each system, track which system's changes it has seen. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.
**Type:** Exact
**Pros:**
- exact
- conceptually simple
**Cons:**
- requires storing data on each system
- implementation is complex
- must be modified as systems are inserted dynamically
## Proposal 7: Total-Order Change Detection
Discussed [here](https://github.com/bevyengine/bevy/issues/68#issuecomment-754326523). This proposal is somewhat complicated by the new scheduler, but I believe it should still be conceptually feasible. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.
**Type:** Exact
**Pros:**
- exact
- efficient data storage relative to other exact proposals
**Cons:**
- requires access to the scheduler
- complex implementation and difficulty grokking
- must be modified as systems are inserted dynamically
# Tests
- We will need to verify properties 1, 2, 3, 7 and 8. Priority: 1 > 2 = 3 > 8 > 7
- Ideally we can use identical user-facing syntax for all proposals, allowing us to re-use the same syntax for each.
- When writing tests, we need to carefully specify order using explicit dependencies.
- These tests will need to be duplicated for both components and resources.
- We need to be sure to handle cases where ambiguous system orders exist.
`changing_system` is always the system that makes the changes, and `detecting_system` always detects the changes.
The component / resource changed will be simple boolean wrapper structs.
## Basic Added / Mutated / Changed
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 2
## At Least Once
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs after `detecting_system`
- verify at the end of tick 2
## At Most Once
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs once before `detecting_system`
- increment a counter based on the number of changes detected
- verify at the end of tick 2
## Fast Detection
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 1
## Ambiguous System Ordering Robustness
2 x 3 x 2 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs [before/after] `detecting_system` in tick 1
- `changing_system` runs [after/before] `detecting_system` in tick 2
## System Pausing
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs in tick 1, then is disabled by run criteria
- `detecting_system` is disabled by run criteria until it is run once during tick 3
- verify at the end of tick 3
## Addition Causes Mutation
2 design:
- Resources vs. Components
- `adding_system_1` adds a component / resource
- `adding_system_2` adds the same component / resource
- verify the `Mutated` flag at the end of the tick
- verify the `Added` flag at the end of the tick
First check tests for: https://github.com/bevyengine/bevy/issues/333
Second check tests for: https://github.com/bevyengine/bevy/issues/1443
## Changes Made By Commands
- `adding_system` runs in Update in tick 1, and sends a command to add a component
- `detecting_system` runs in Update in tick 1 and 2, after `adding_system`
- We can't detect the changes in tick 1, since they haven't been processed yet
- If we were to track these changes as being emitted by `adding_system`, we can't detect the changes in tick 2 either, since `detecting_system` has already run once after `adding_system` :(
# Benchmarks
See: [general advice](https://github.com/bevyengine/bevy/blob/master/docs/profiling.md), [Criterion crate](https://github.com/bheisler/criterion.rs)
There are several critical parameters to vary:
1. entity count (1 to 10^9)
2. fraction of entities that are changed (0% to 100%)
3. cost to perform work on changed entities, i.e. workload (1 ns to 1s)
1 and 2 should be varied between benchmark runs. 3 can be added on computationally.
We want to measure:
- memory cost
- run time
We should collect these measurements across several frames (100?) to reduce bootup effects and accurately measure the mean, variance and drift.
Entity-component change detection is much more important to benchmark than resource change detection, due to the orders of magnitude higher number of pieces of data.
No change detection at all should be included in benchmarks as a second control for cases where missing changes is unacceptable.
## Graphs
1. y: performance, x: log_10(entity count), color: proposal, facet: performance metric. Set cost to perform work to 0.
2. y: run time, x: cost to perform work, color: proposal, facet: fraction changed. Set number of entities to 10^6
3. y: memory, x: frames, color: proposal
# Conclusions
1. Is the theoretical categorization of the proposals correct according to our tests?
2. How does the performance of the proposals compare without any load?
3. How does the performance of the proposals compare with realistic loads?
4. At what workload does more exact change tracking become worth the (presumably) higher overhead?
5. When does adding change-detection to save on work become worthwhile?
6. Is there enough divergence in performance between the best solutions in each class to ship more than one change-tracking solution?
# Implementation Plan
1. Write a test suite.
2. Verify that tests fail for existing approach.
3. Write a benchmark suite.
4. Get performance numbers for existing approach.
5. Implement, test and benchmark various solutions using a Git branch per proposal.
6. Create a draft PR with all solutions and present results to team.
7. Select a solution and replace existing change detection.
Co-authored-by: Brice DAVIER <bricedavier@gmail.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
`Color` can now come from different color spaces or representations:
- sRGB
- linear RGB
- HSL
This fixes #1193 by allowing the creation of const colors of all types, which are converted to the linear RGB color space for rendering.
I went with an enum after trying with two different types (`Color` and `LinearColor`) to be able to use the different variants in all place where a `Color` is expected.
I also added the HSL representation because:
- I like it
- it's useful in some cases, see the `contributors` example: I can just change the saturation and lightness while keeping the hue of the color
- I think adding another variant not using `red`, `green`, `blue` makes it clearer there are differences
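A hedged sketch of the enum-based design described above (field set simplified, sRGB conversion approximated with a plain 2.2 gamma, HSL conversion elided):
```rust
#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum Color {
    Rgba { red: f32, green: f32, blue: f32, alpha: f32 },
    RgbaLinear { red: f32, green: f32, blue: f32, alpha: f32 },
    Hsla { hue: f32, saturation: f32, lightness: f32, alpha: f32 },
}

impl Color {
    // Every variant is constructible in a const context, so const colors work
    // for all representations.
    const RED: Color = Color::Rgba { red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0 };

    // Everything funnels into linear RGB right before rendering.
    fn as_rgba_linear(self) -> [f32; 4] {
        match self {
            Color::RgbaLinear { red, green, blue, alpha } => [red, green, blue, alpha],
            Color::Rgba { red, green, blue, alpha } => {
                // Approximate sRGB -> linear conversion, for brevity.
                [red.powf(2.2), green.powf(2.2), blue.powf(2.2), alpha]
            }
            Color::Hsla { .. } => todo!("HSL -> linear RGB conversion omitted here"),
        }
    }
}

fn main() {
    println!("{:?}", Color::RED.as_rgba_linear());
}
```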
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
It's a follow-up of #1550.
I think calling explicit methods/values instead of default makes the code easier to read: "what is `Quat::default()`" vs "Oh, it's `Quat::IDENTITY`"
`Transform::identity()` and `GlobalTransform::identity()` can also be consts and I replaced the calls to their `default()` impl with `identity()`
Fixes all warnings from `cargo doc --all`.
Those related to code blocks were introduced in #1612, but re-formatting using the experimental features in `rustfmt.toml` doesn't seem to reintroduce them.
* Adds labels and orderings to systems that need them (uses the new many-to-many labels for InputSystem)
* Removes the Event, PreEvent, Scene, and Ui stages in favor of First, PreUpdate, and PostUpdate (there is more collapsing potential, such as the Asset stages and _maybe_ removing First, but those have more nuance so they should be handled separately)
* Ambiguity detection now prints component conflicts
* Removed broken change filters from flex calculation (which implicitly relied on the z-update system always modifying translation.z). This will require more work to make it behave as expected, so I just removed it (and it was already doing this work every frame).
This is an effort to provide the correct `#[reflect_value(...)]` attributes where they are needed.
Supersedes #1533 and resolves #1528.
---
I am working under the following assumptions (thanks to @bjorn3 and @Davier for advice here):
- Any `enum` that derives `Reflect` and one or more of { `Serialize`, `Deserialize`, `PartialEq`, `Hash` } needs a `#[reflect_value(...)]` attribute containing the same subset of { `Serialize`, `Deserialize`, `PartialEq`, `Hash` } that is present on the derive.
- Same as above for `struct` and `#[reflect(...)]`, respectively.
- If a `struct` is used as a component, it should also have `#[reflect(Component)]`
- All reflected types should be registered in their plugins
I treated the following as components (added `#[reflect(Component)]` if necessary):
- `bevy_render`
- `struct RenderLayers`
- `bevy_transform`
- `struct GlobalTransform`
- `struct Parent`
- `struct Transform`
- `bevy_ui`
- `struct Style`
Not treated as components:
- `bevy_math`
- `struct Size<T>`
- `struct Rect<T>`
- Note: The updates for `Size<T>` and `Rect<T>` in `bevy::math::geometry` required using @Davier's suggestion to add `+ PartialEq` to the trait bound. I then registered the specific types used over in `bevy_ui` such as `Size<Val>`, etc. in `bevy_ui`'s plugin, since `bevy::math` does not contain a plugin.
- `bevy_render`
- `struct Color`
- `struct PipelineSpecialization`
- `struct ShaderSpecialization`
- `enum PrimitiveTopology`
- `enum IndexFormat`
Not Addressed:
- I am not searching for components in Bevy that are _not_ reflected. So if there are components that are not reflected that should be reflected, that will need to be figured out in another PR.
- I only added `#[reflect(...)]` or `#[reflect_value(...)]` entries for the set of four traits { `Serialize`, `Deserialize`, `PartialEq`, `Hash` } _if they were derived via `#[derive(...)]`_. I did not look for manual trait implementations of the same set of four, nor did I consider any traits outside the four. Are those other possibilities something that needs to be looked into?
This adds a `EventWriter<T>` `SystemParam` that is just a thin wrapper around `ResMut<Events<T>>`. This is primarily to have API symmetry between the reader and writer, and has the added benefit of easily improving the API later with no breaking changes.
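A hedged sketch of the "thin wrapper" idea with simplified stand-in types (the real `EventWriter` wraps `ResMut<Events<T>>` as a `SystemParam`):
```rust
struct Events<T> {
    queue: Vec<T>,
}

struct EventWriter<'a, T> {
    events: &'a mut Events<T>,
}

impl<'a, T> EventWriter<'a, T> {
    // Sending just delegates to the underlying Events collection, so the writer
    // adds API symmetry with EventReader without adding new behavior.
    fn send(&mut self, event: T) {
        self.events.queue.push(event);
    }
}

fn main() {
    let mut events = Events { queue: Vec::new() };
    let mut writer = EventWriter { events: &mut events };
    writer.send("level_up");
    assert_eq!(events.queue.len(), 1);
}
```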
Super simple and straightforward. I need this for the tilemap because, if I need to update all chunk indices, I can then calculate it once and clone it. Of course, for now I'm just returning the Vec itself and then wrapping it, but it would be nice if I didn't have to do that.
I was fiddling with creating a mesh importer today, and decided to write some more docs.
A lot of this is describing general renderer/GL stuff, so you'll probably find most of it self explanatory anyway, but perhaps it will be useful for someone.
Fix staging buffer required size calculation (fixes#1056)
The `required_staging_buffer_size` is currently calculated differently in two places, each will be correct in different situations:
* `prepare_staging_buffers()` based on actual `buffer_byte_len()`
* `set_required_staging_buffer_size_to_max()` based on item_size
In the case of render assets, `prepare_staging_buffers()` would only operate over changed assets. If some of the assets didn't change, their size wouldn't be taken into account for the `required_staging_buffer_size`. In some cases, this meant the buffers wouldn't be resized when they should. Now `prepare_staging_buffers()` is called over all assets, which may hit performance but at least gets the size right.
Shortly after `prepare_staging_buffers()`, `set_required_staging_buffer_size_to_max()` would unconditionally overwrite the previously computed value, even if using `item_size` made no sense. Now it only overwrites the value if bigger.
This can be considered a short term hack, but should prevent a few hard to debug panics.
# Bevy ECS V2
This is a rewrite of Bevy ECS (basically everything but the new executor/schedule, which are already awesome). The overall goal was to improve the performance and versatility of Bevy ECS. Here is a quick bulleted list of changes before we dive into the details:
* Complete World rewrite
* Multiple component storage types:
* Tables: fast cache friendly iteration, slower add/removes (previously called Archetypes)
* Sparse Sets: fast add/remove, slower iteration
* Stateful Queries (caches query results for faster iteration. fragmented iteration is _fast_ now)
* Stateful System Params (caches expensive operations. inspired by @DJMcNab's work in #1364)
* Configurable System Params (users can set configuration when they construct their systems. once again inspired by @DJMcNab's work)
* Archetypes are now "just metadata", component storage is separate
* Archetype Graph (for faster archetype changes)
* Component Metadata
* Configure component storage type
* Retrieve information about component size/type/name/layout/send-ness/etc
* Components are uniquely identified by a densely packed ComponentId
* TypeIds are now totally optional (which should make implementing scripting easier)
* Super fast "for_each" query iterators
* Merged Resources into World. Resources are now just a special type of component
* EntityRef/EntityMut builder apis (more efficient and more ergonomic)
* Fast bitset-backed `Access<T>` replaces old hashmap-based approach everywhere
* Query conflicts are determined by component access instead of archetype component access (to avoid random failures at runtime)
* With/Without are still taken into account for conflicts, so this should still be comfy to use
* Much simpler `IntoSystem` impl
* Significantly reduced the amount of hashing throughout the ecs in favor of Sparse Sets (indexed by densely packed ArchetypeId, ComponentId, BundleId, and TableId)
* Safety Improvements
* Entity reservation uses a normal world reference instead of unsafe transmute
* QuerySets no longer transmute lifetimes
* Made traits "unsafe" where relevant
* More thorough safety docs
* WorldCell
* Exposes safe mutable access to multiple resources at a time in a World
* Replaced "catch all" `System::update_archetypes(world: &World)` with `System::new_archetype(archetype: &Archetype)`
* Simpler Bundle implementation
* Replaced slow "remove_bundle_one_by_one" used as fallback for Commands::remove_bundle with fast "remove_bundle_intersection"
* Removed `Mut<T>` query impl. it is better to only support one way: `&mut T`
* Removed with() from `Flags<T>` in favor of `Option<Flags<T>>`, which allows querying for flags to be "filtered" by default
* Components now have is_send property (currently only resources support non-send)
* More granular module organization
* New `RemovedComponents<T>` SystemParam that replaces `query.removed::<T>()`
* `world.resource_scope()` for mutable access to resources and world at the same time
* WorldQuery and QueryFilter traits unified. FilterFetch trait added to enable "short circuit" filtering. Auto impled for cases that don't need it
* Significantly slimmed down SystemState in favor of individual SystemParam state
* System Commands changed from `commands: &mut Commands` back to `mut commands: Commands` (to allow Commands to have a World reference)
Fixes#1320
## `World` Rewrite
This is a from-scratch rewrite of `World` that fills the niche that `hecs` used to. Yes, this means Bevy ECS is no longer a "fork" of hecs. We're going out on our own!
(the only shared code between the projects is the entity id allocator, which is already basically ideal)
A huge shout out to @SanderMertens (author of [flecs](https://github.com/SanderMertens/flecs)) for sharing some great ideas with me (specifically hybrid ecs storage and archetype graphs). He also helped advise on a number of implementation details.
## Component Storage (The Problem)
Two ECS storage paradigms have gained a lot of traction over the years:
* **Archetypal ECS**:
* Stores components in "tables" with static schemas. Each "column" stores components of a given type. Each "row" is an entity.
* Each "archetype" has its own table. Adding/removing an entity's component changes the archetype.
* Enables super-fast Query iteration due to its cache-friendly data layout
* Comes at the cost of more expensive add/remove operations for an Entity's components, because all components need to be copied to the new archetype's "table"
* **Sparse Set ECS**:
* Stores components of the same type in densely packed arrays, which are sparsely indexed by densely packed unsigned integers (Entity ids)
* Query iteration is slower than Archetypal ECS because each entity's component could be at any position in the sparse set. This "random access" pattern isn't cache friendly. Additionally, there is an extra layer of indirection because you must first map the entity id to an index in the component array.
* Adding/removing components is a cheap, constant time operation
Bevy ECS V1, hecs, legion, flecs, and Unity DOTS are all "archetypal ecs-es". I personally think "archetypal" storage is a good default for game engines. An entity's archetype doesn't need to change frequently in general, and it creates "fast by default" query iteration (which is a much more common operation). It is also "self optimizing". Users don't need to think about optimizing component layouts for iteration performance. It "just works" without any extra boilerplate.
Shipyard and EnTT are "sparse set ecs-es". They employ "packing" as a way to work around the "suboptimal by default" iteration performance for specific sets of components. This helps, but I didn't think this was a good choice for a general purpose engine like Bevy because:
1. "packs" conflict with each other. If bevy decides to internally pack the Transform and GlobalTransform components, users are then blocked if they want to pack some custom component with Transform.
2. users need to take manual action to optimize
Developers selecting an ECS framework are stuck with a hard choice. Select an "archetypal" framework with "fast iteration everywhere" but without the ability to cheaply add/remove components, or select a "sparse set" framework to cheaply add/remove components but with slower iteration performance.
## Hybrid Component Storage (The Solution)
In Bevy ECS V2, we get to have our cake and eat it too. It now has _both_ of the component storage types above (and more can be added later if needed):
* **Tables** (aka "archetypal" storage)
* The default storage. If you don't configure anything, this is what you get
* Fast iteration by default
* Slower add/remove operations
* **Sparse Sets**
* Opt-in
* Slower iteration
* Faster add/remove operations
These storage types complement each other perfectly. By default Query iteration is fast. If developers know that they want to add/remove a component at high frequencies, they can set the storage to "sparse set":
```rust
world.register_component(
    ComponentDescriptor::new::<MyComponent>(StorageType::SparseSet)
).unwrap();
```
## Archetypes
Archetypes are now "just metadata" ... they no longer store components directly. They do store:
* The `ComponentId`s of each of the Archetype's components (and that component's storage type)
* Archetypes are uniquely defined by their component layouts
* For example: entities with "table" components `[A, B, C]` _and_ "sparse set" components `[D, E]` will always be in the same archetype.
* The `TableId` associated with the archetype
* For now each archetype has exactly one table (which can have no components),
* There is a 1->Many relationship from Tables->Archetypes. A given table could have any number of archetype components stored in it:
* Ex: an entity with "table storage" components `[A, B, C]` and "sparse set" components `[D, E]` will share the same `[A, B, C]` table as an entity with `[A, B, C]` table component and `[F]` sparse set components.
* This 1->Many relationship is how we preserve fast "cache friendly" iteration performance when possible (more on this later)
* A list of entities that are in the archetype and the row id of the table they are in
* ArchetypeComponentIds
* unique densely packed identifiers for (ArchetypeId, ComponentId) pairs
* used by the schedule executor for cheap system access control
* "Archetype Graph Edges" (see the next section)
## The "Archetype Graph"
Archetype changes in Bevy (and a number of other archetypal ecs-es) have historically been expensive to compute. First, you need to allocate a new vector of the entity's current component ids, add or remove components based on the operation performed, sort it (to ensure it is order-independent), then hash it to find the archetype (if it exists). And that's all before we get to the _already_ expensive full copy of all components to the new table storage.
The solution is to build a "graph" of archetypes to cache these results. @SanderMertens first exposed me to the idea (and he got it from @gjroelofs, who came up with it). They propose adding directed edges between archetypes for add/remove component operations. If `ComponentId`s are densely packed, you can use sparse sets to cheaply jump between archetypes.
Bevy takes this one step further by using add/remove `Bundle` edges instead of `Component` edges. Bevy encourages the use of `Bundles` to group add/remove operations. This is largely for "clearer game logic" reasons, but it also helps cut down on the number of archetype changes required. `Bundles` now also have densely-packed `BundleId`s. This allows us to use a _single_ edge for each bundle operation (rather than needing to traverse N edges ... one for each component). Single component operations are also bundles, so this is strictly an improvement over a "component only" graph.
As a result, an operation that used to be _heavy_ (both for allocations and compute) is now two dirt-cheap array lookups and zero allocations.
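A hedged sketch of the edge cache with simplified, hypothetical types: each archetype stores the destination archetype for a given bundle add, indexed by a dense `BundleId`, so repeating the same operation becomes an array lookup instead of re-hashing a sorted component list.
```rust
type ArchetypeId = usize;
type BundleId = usize;

struct Edges {
    // Indexed by BundleId; None means "not computed yet".
    add_bundle: Vec<Option<ArchetypeId>>,
}

impl Edges {
    fn get_add_bundle(&self, bundle: BundleId) -> Option<ArchetypeId> {
        self.add_bundle.get(bundle).copied().flatten()
    }

    fn cache_add_bundle(&mut self, bundle: BundleId, target: ArchetypeId) {
        if self.add_bundle.len() <= bundle {
            self.add_bundle.resize(bundle + 1, None);
        }
        self.add_bundle[bundle] = Some(target);
    }
}

fn main() {
    let mut edges = Edges { add_bundle: Vec::new() };
    assert_eq!(edges.get_add_bundle(3), None); // slow path: compute once, then cache
    edges.cache_add_bundle(3, 7);
    assert_eq!(edges.get_add_bundle(3), Some(7)); // fast path from now on
}
```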
## Stateful Queries
World queries are now stateful. This allows us to:
1. Cache archetype (and table) matches
* This resolves another issue with (naive) archetypal ECS: query performance getting worse as the number of archetypes goes up (and fragmentation occurs).
2. Cache Fetch and Filter state
* The expensive parts of fetch/filter operations (such as hashing the TypeId to find the ComponentId) now only happen once when the Query is first constructed
3. Incrementally build up state
* When new archetypes are added, we only process the new archetypes (no need to rebuild state for old archetypes)
As a result, the direct `World` query api now looks like this:
```rust
let mut query = world.query::<(&A, &mut B)>();
for (a, mut b) in query.iter_mut(&mut world) {
}
```
Requiring `World` to generate stateful queries (rather than letting the `QueryState` type be constructed separately) allows us to ensure that _all_ queries are properly initialized (and the relevant world state, such as ComponentIds). This enables QueryState to remove branches from its operations that check for initialization status (and also enables query.iter() to take an immutable world reference because it doesn't need to initialize anything in world).
However in systems, this is a non-breaking change. State management is done internally by the relevant SystemParam.
## Stateful SystemParams
Like Queries, `SystemParams` now also cache state. For example, `Query` system params store the "stateful query" state mentioned above. Commands store their internal `CommandQueue`. This means you can now safely use as many separate `Commands` parameters in your system as you want. `Local<T>` system params store their `T` value in their state (instead of in Resources).
SystemParam state also enabled a significant slim-down of SystemState. It is much nicer to look at now.
Per-SystemParam state naturally insulates us from an "aliased mut" class of errors we have hit in the past (ex: using multiple `Commands` system params).
(credit goes to @DJMcNab for the initial idea and draft pr here #1364)
## Configurable SystemParams
@DJMcNab also had the great idea to make SystemParams configurable. This allows users to provide some initial configuration / values for system parameters (when possible). Most SystemParams have no config (the config type is `()`), but the `Local<T>` param now supports user-provided parameters:
```rust
fn foo(value: Local<usize>) {
}
app.add_system(foo.system().config(|c| c.0 = Some(10)));
```
## Uber Fast "for_each" Query Iterators
Developers now have the choice to use a fast "for_each" iterator, which yields ~1.5-3x iteration speed improvements for "fragmented iteration", and minor ~1.2x iteration speed improvements for unfragmented iteration.
```rust
fn system(query: Query<(&A, &mut B)>) {
    // you now have the option to do this for a speed boost
    query.for_each_mut(|(a, mut b)| {
    });
    // however normal iterators are still available
    for (a, mut b) in query.iter_mut() {
    }
}
```
I think in most cases we should continue to encourage "normal" iterators as they are more flexible and more "rust idiomatic". But when that extra "oomf" is needed, it makes sense to use `for_each`.
We should also consider using `for_each` for internal bevy systems to give our users a nice speed boost (but that should be a separate pr).
## Component Metadata
`World` now has a `Components` collection, which is accessible via `world.components()`. This stores mappings from `ComponentId` to `ComponentInfo`, as well as `TypeId` to `ComponentId` mappings (where relevant). `ComponentInfo` stores information about the component, such as ComponentId, TypeId, memory layout, send-ness (currently limited to resources), and storage type.
## Significantly Cheaper `Access<T>`
We used to use `TypeAccess<TypeId>` to manage read/write component/archetype-component access. This was expensive because TypeIds must be hashed and compared individually. The parallel executor got around this by "condensing" type ids into bitset-backed access types. This worked, but it had to be re-generated from the `TypeAccess<TypeId>` sources every time archetypes changed.
This pr removes TypeAccess in favor of faster bitset access everywhere. We can do this thanks to the move to densely packed `ComponentId`s and `ArchetypeComponentId`s.
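To illustrate why this is cheaper (a self-contained conceptual sketch, not Bevy's actual `Access<T>` implementation), conflict detection over densely packed ids reduces to a few bitwise operations instead of hashing and comparing TypeIds:
```rust
// Conceptual sketch only: a bitset-backed access set over dense ids.
#[derive(Default, Clone, Copy)]
struct BitAccess {
    reads: u64,  // bit i set => reads the component with dense id i
    writes: u64, // bit i set => writes the component with dense id i
}

impl BitAccess {
    fn add_read(&mut self, id: u8) { self.reads |= 1 << id; }
    fn add_write(&mut self, id: u8) { self.writes |= 1 << id; }
    fn is_compatible(&self, other: &BitAccess) -> bool {
        // two systems conflict if either writes something the other reads or writes
        (self.writes & (other.reads | other.writes)) == 0
            && (other.writes & self.reads) == 0
    }
}

fn main() {
    let (a_id, b_id) = (0, 1);
    let mut first = BitAccess::default();
    first.add_read(a_id);
    first.add_write(b_id);
    let mut second = BitAccess::default();
    second.add_read(a_id);
    assert!(first.is_compatible(&second)); // read A vs read A + write B: fine
    second.add_write(b_id);
    assert!(!first.is_compatible(&second)); // both write B: conflict
}
```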
## Merged Resources into World
Resources had a lot of redundant functionality with Components. They stored typed data, they had access control, they had unique ids, they were queryable via SystemParams, etc. In fact the _only_ major difference between them was that they were unique (and didn't correlate to an entity).
Separate resources also had the downside of requiring a separate set of access controls, which meant the parallel executor needed to compare more bitsets per system and manage more state.
I initially got the "separate resources" idea from `legion`. I think that design was motivated by the fact that it made the direct world query/resource lifetime interactions more manageable. It certainly made our lives easier when using Resources alongside hecs/bevy_ecs. However we already have a construct for safely and ergonomically managing in-world lifetimes: systems (which use `Access<T>` internally).
This pr merges Resources into World:
```rust
world.insert_resource(1);
world.insert_resource(2.0);
let a = world.get_resource::<i32>().unwrap();
let mut b = world.get_resource_mut::<f64>().unwrap();
*b = 3.0;
```
Resources are now just a special kind of component. They have their own ComponentIds (and their own resource TypeId->ComponentId scope, so they don't conflict with components of the same type). They are stored in a special "resource archetype", which stores components inside the archetype using a new `unique_components` sparse set (note that this sparse set could later be used to implement Tags). This allows us to keep the code size small by reusing existing data structures (namely Column, Archetype, ComponentFlags, and ComponentInfo). This also allows the executor to use a single `Access<ArchetypeComponentId>` per system. It should also make scripting language integration easier.
_But_ this merge did create problems for people directly interacting with `World`. What if you need mutable access to multiple resources at the same time? `world.get_resource_mut()` borrows World mutably!
## WorldCell
WorldCell applies the `Access<ArchetypeComponentId>` concept to direct world access:
```rust
let world_cell = world.cell();
let a = world_cell.get_resource_mut::<i32>().unwrap();
let b = world_cell.get_resource_mut::<f64>().unwrap();
```
This adds cheap runtime checks (a sparse set lookup of `ArchetypeComponentId` and a counter) to ensure that world accesses do not conflict with each other. Each operation returns a `WorldBorrow<'w, T>` or `WorldBorrowMut<'w, T>` wrapper type, which will release the relevant ArchetypeComponentId resources when dropped.
World caches the access sparse set (and only one cell can exist at a time), so `world.cell()` is a cheap operation.
WorldCell does _not_ use atomic operations. It is non-send, does a mutable borrow of world to prevent other accesses, and uses a simple `Rc<RefCell<ArchetypeComponentAccess>>` wrapper in each WorldBorrow pointer.
The api is currently limited to resource access, but it can and should be extended to queries / entity component access.
## Resource Scopes
WorldCell does not yet support component queries, and even when it does there are sometimes legitimate reasons to want a mutable world ref _and_ a mutable resource ref (ex: bevy_render and bevy_scene both need this). In these cases we could always drop down to the unsafe `world.get_resource_unchecked_mut()`, but that is not ideal!
Instead developers can use a "resource scope"
```rust
world.resource_scope(|world: &mut World, a: &mut A| {
})
```
This temporarily removes the `A` resource from `World`, provides mutable pointers to both, and re-adds A to World when finished. Thanks to the move to ComponentIds/sparse sets, this is a cheap operation.
If multiple resources are required, scopes can be nested. We could also consider adding a "resource tuple" to the api if this pattern becomes common and the boilerplate gets nasty.
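For example, nesting might look like this (a minimal sketch following the closure form above, with hypothetical resource types `A` and `B`):
```rust
world.resource_scope(|world: &mut World, a: &mut A| {
    world.resource_scope(|world: &mut World, b: &mut B| {
        // `a`, `b`, and `world` are all mutably accessible here;
        // each resource is re-added to the World when its scope ends
    });
});
```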
## Query Conflicts Use ComponentId Instead of ArchetypeComponentId
For safety reasons, systems cannot contain queries that conflict with each other without wrapping them in a QuerySet. On bevy `main`, we use ArchetypeComponentIds to determine conflicts. This is nice because it can take into account filters:
```rust
// these queries will never conflict due to their filters
fn filter_system(a: Query<&mut A, With<B>>, b: Query<&mut A, Without<B>>) {
}
```
But it also has a significant downside:
```rust
// these queries will not conflict _until_ an entity with A, B, and C is spawned
fn maybe_conflicts_system(a: Query<(&mut A, &C)>, b: Query<(&mut A, &B)>) {
}
```
The system above will panic at runtime if an entity with A, B, and C is spawned. This makes it hard to trust that your game logic will run without crashing.
In this pr, I switched to using `ComponentId` instead. This _is_ more constraining. `maybe_conflicts_system` will now always fail, but it will do it consistently at startup. Naively, it would also _disallow_ `filter_system`, which would be a significant downgrade in usability. Bevy has a number of internal systems that rely on disjoint queries and I expect it to be a common pattern in userspace.
To resolve this, I added a new `FilteredAccess<T>` type, which wraps `Access<T>` and adds with/without filters. If two `FilteredAccess` have with/without values that prove they are disjoint, they will no longer conflict.
## EntityRef / EntityMut
World entity operations on `main` require that the user passes in an `entity` id to each operation:
```rust
let entity = world.spawn((A, )); // create a new entity with A
world.get::<A>(entity);
world.insert(entity, (B, C));
world.insert_one(entity, D);
```
This means that each operation needs to look up the entity location / verify its validity. The initial spawn operation also requires a Bundle as input. This can be awkward when no components are required (or one component is required).
These operations have been replaced by `EntityRef` and `EntityMut`, which are "builder-style" wrappers around world that provide read and read/write operations on a single, pre-validated entity:
```rust
// spawn now takes no inputs and returns an EntityMut
let entity = world.spawn()
.insert(A) // insert a single component into the entity
.insert_bundle((B, C)) // insert a bundle of components into the entity
.id(); // id returns the Entity id
// Returns EntityMut (or panics if the entity does not exist)
world.entity_mut(entity)
.insert(D)
.insert_bundle(SomeBundle::default());
{
// returns EntityRef (or panics if the entity does not exist)
let d = world.entity(entity)
.get::<D>() // gets the D component
.unwrap();
// world.get still exists for ergonomics
let d = world.get::<D>(entity).unwrap();
}
// These variants return Options if you want to check existence instead of panicking
world.get_entity_mut(entity)
.unwrap()
.insert(E);
if let Some(entity_ref) = world.get_entity(entity) {
let d = entity_ref.get::<D>().unwrap();
}
```
This _does not_ affect the current Commands api or terminology. I think that should be a separate conversation as that is a much larger breaking change.
## Safety Improvements
* Entity reservation in Commands uses a normal world borrow instead of an unsafe transmute
* QuerySets no longer transmutes lifetimes
* Made traits "unsafe" when implementing a trait incorrectly could cause unsafety
* More thorough safety docs
## RemovedComponents SystemParam
The old approach to querying removed components, `query.removed::<T>()`, was confusing because it had no connection to the query itself. I replaced it with the following, which is both clearer and allows us to cache the ComponentId mapping in the SystemParamState:
```rust
fn system(removed: RemovedComponents<T>) {
for entity in removed.iter() {
}
}
```
## Simpler Bundle implementation
Bundles are no longer responsible for sorting (or deduping) TypeInfo. They are just a simple ordered list of component types / data. This makes the implementation smaller and opens the door to an easy "nested bundle" implementation in the future (which I might even add in this PR). Duplicate detection is now done once per bundle type by World the first time a bundle is used.
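For illustration (a minimal sketch with hypothetical components `Position` and `Velocity`), a derived bundle is now conceptually nothing more than its ordered fields:
```rust
#[derive(Bundle)]
struct MovementBundle {
    // the bundle is just this ordered list of component data; duplicate
    // component types are detected once per bundle type, the first time
    // the bundle is used in a World
    position: Position,
    velocity: Velocity,
}
```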
## Unified WorldQuery and QueryFilter types
(don't worry they are still separate type _parameters_ in Queries .. this is a non-breaking change)
WorldQuery and QueryFilter were already basically identical apis. With the addition of `FetchState` and more storage-specific fetch methods, the overlap was even clearer (and the redundancy more painful).
QueryFilters are now just `F: WorldQuery where F::Fetch: FilterFetch`. FilterFetch requires `Fetch<Item = bool>` and adds new "short circuit" variants of fetch methods. This enables a filter tuple like `(With<A>, Without<B>, Changed<C>)` to stop evaluating the filter after the first mismatch is encountered. FilterFetch is automatically implemented for `Fetch` implementations that return bool.
This forces fetch implementations that return things like `(bool, bool, bool)` (such as the filter above) to manually implement FilterFetch and decide whether or not to short-circuit.
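As a sketch of what this looks like from the user side (components `A` through `D` are hypothetical), a tuple filter is itself just a `WorldQuery` whose fetch yields booleans:
```rust
fn filtered_system(query: Query<&A, (With<B>, Without<C>, Changed<D>)>) {
    for a in query.iter() {
        // thanks to FilterFetch's short-circuit variants, once one element
        // of the tuple filter fails for an entity, the rest are skipped
    }
}
```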
## More Granular Modules
World no longer globs all of the internal modules together. It now exports `core`, `system`, and `schedule` separately. I'm also considering exporting `core` submodules directly as that is still pretty "glob-ey" and unorganized (feedback welcome here).
## Remaining Draft Work (to be done in this pr)
* ~~panic on conflicting WorldQuery fetches (&A, &mut A)~~
* ~~bevy `main` and hecs both currently allow this, but we should protect against it if possible~~
* ~~batch_iter / par_iter (currently stubbed out)~~
* ~~ChangedRes~~
* ~~I skipped this while we sort out #1313. This pr should be adapted to account for whatever we land on there~~.
* ~~The `Archetypes` and `Tables` collections use hashes of sorted lists of component ids to uniquely identify each archetype/table. This hash is then used as the key in a HashMap to look up the relevant ArchetypeId or TableId. (which doesn't handle hash collisions properly)~~
* ~~It is currently unsafe to generate a Query from "World A", then use it on "World B" (despite the api claiming it is safe). We should probably close this gap. This could be done by adding a randomly generated WorldId to each world, then storing that id in each Query. They could then be compared to each other on each `query.do_thing(&world)` operation. This _does_ add an extra branch to each query operation, so I'm open to other suggestions if people have them.~~
* ~~Nested Bundles (if i find time)~~
## Potential Future Work
* Expand WorldCell to support queries.
* Consider not allocating in the empty archetype on `world.spawn()`
* ex: return something like EntityMutUninit, which turns into EntityMut after an `insert` or `insert_bundle` op
* this actually regressed performance last time i tried it, but in theory it should be faster
* Optimize SparseSet::insert (see `PERF` comment on insert)
* Replace SparseArray `Option<T>` with T::MAX to cut down on branching
* would enable cheaper get_unchecked() operations
* upstream fixedbitset optimizations
* fixedbitset could be allocation free for small block counts (store blocks in a SmallVec)
* fixedbitset could have a const constructor
* Consider implementing Tags (archetype-specific by-value data that affects archetype identity)
* ex: ArchetypeA could have `[A, B, C]` table components and `[D(1)]` "tag" component. ArchetypeB could have `[A, B, C]` table components and a `[D(2)]` tag component. The archetypes are different, despite both having D tags because the value inside D is different.
* this could potentially build on top of the `archetype.unique_components` added in this pr for resource storage.
* Consider reverting `all_tuples` proc macro in favor of the old `macro_rules` implementation
* all_tuples is more flexible and produces cleaner documentation (the macro_rules version produces weird type parameter orders due to parser constraints)
* but unfortunately all_tuples also appears to make Rust Analyzer sad/slow when working inside of `bevy_ecs` (does not affect user code)
* Consider "resource queries" and/or "mixed resource and entity component queries" as an alternative to WorldCell
* this is basically just "systems" so maybe it's not worth it
* Add more world ops
* `world.clear()`
* `world.reserve<T: Bundle>(count: usize)`
* Try using the old archetype allocation strategy (allocate new memory on resize and copy everything over). I expect this to improve batch insertion performance at the cost of unbatched performance. But that's just a guess. I'm not an allocation perf pro :)
* Adapt Commands apis for consistency with new World apis
## Benchmarks
key:
* `bevy_old`: bevy `main` branch
* `bevy`: this branch
* `_foreach`: uses an optimized for_each iterator
* `_sparse`: uses sparse set storage (if unspecified assume table storage)
* `_system`: runs inside a system (if unspecified assume test happens via direct world ops)
### Simple Insert (from ecs_bench_suite)
![image](https://user-images.githubusercontent.com/2694663/109245573-9c3ce100-7795-11eb-9003-bfd41cd5c51f.png)
### Simple Iter (from ecs_bench_suite)
![image](https://user-images.githubusercontent.com/2694663/109245795-ffc70e80-7795-11eb-92fb-3ffad09aabf7.png)
### Fragment Iter (from ecs_bench_suite)
![image](https://user-images.githubusercontent.com/2694663/109245849-0fdeee00-7796-11eb-8d25-eb6b7a682c48.png)
### Sparse Fragmented Iter
Iterate a query that matches 5 entities from a single matching archetype, but there are 100 unmatching archetypes
![image](https://user-images.githubusercontent.com/2694663/109245916-2b49f900-7796-11eb-9a8f-ed89c203f940.png)
### Schedule (from ecs_bench_suite)
![image](https://user-images.githubusercontent.com/2694663/109246428-1fab0200-7797-11eb-8841-1b2161e90fa4.png)
### Add Remove Component (from ecs_bench_suite)
![image](https://user-images.githubusercontent.com/2694663/109246492-39e4e000-7797-11eb-8985-2706bd0495ab.png)
### Add Remove Component Big
Same as the test above, but each entity has 5 "large" matrix components and 1 "large" matrix component is added and removed
![image](https://user-images.githubusercontent.com/2694663/109246517-449f7500-7797-11eb-835e-28b6790daeaa.png)
### Get Component
Looks up a single component value a large number of times
![image](https://user-images.githubusercontent.com/2694663/109246129-87ad1880-7796-11eb-9fcb-c38012aa7c70.png)
This PR implements wireframe rendering.
This is now ready as soon as #1401 gets merged.
Usage:
```rust
app
.insert_resource(WgpuOptions {
name: Some("3d_scene"),
features: WgpuFeatures::NON_FILL_POLYGON_MODE,
..Default::default()
}) // To enable the NON_FILL_POLYGON_MODE feature
.add_plugin(WireframePlugin)
.run();
```
Now we just need to add the `Wireframe` component to an entity, and it'll draw its wireframe.
We can also enable wireframe drawing globally by setting the global property in the `WireframeConfig` resource to `true`.
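A minimal sketch of both paths (the setup system and entity here are placeholders; the `global` field name is taken from the description above):
```rust
fn setup(mut commands: Commands, mut wireframe_config: ResMut<WireframeConfig>) {
    // per-entity: draw the wireframe of just this entity
    commands.spawn_bundle(PbrBundle::default()).insert(Wireframe);
    // or globally: draw wireframes for every entity that supports it
    wireframe_config.global = true;
}
```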
Co-authored-by: Zhixing Zhang <me@neoto.xin>
Adds the original type_name to `NodeState`, enabling plugins like [this](https://github.com/jakobhellermann/bevy_mod_debugdump).
This does increase the `NodeState` type by 16 bytes, but it is already 176 bytes, so it's not that big of an increase.
`RenderGraph` errors only give the `Uuid` of the node. So for my graphviz-dot-based visualization of the `RenderGraph`, I really wanted to show the type name to the user. I think it makes sense to have it accessible for at least debugging purposes.
This PR is easiest to review commit by commit.
Followup on https://github.com/bevyengine/bevy/pull/1309#issuecomment-767310084
- [x] Switch from a bash script to an xtask rust workspace member.
- Results in ~30s longer CI due to compilation of the xtask itself
- Enables Bevy contributors on any platform to run `cargo ci` to run linting -- if the default available Rust is the same version as on CI, then the command should give an identical result.
- [x] Use the xtask from official CI so there's only one place to update.
- [x] Bonus: Run clippy on the _entire_ workspace (existing CI setup was missing the `--workspace` flag)
- [x] Clean up newly-exposed clippy errors
~#1388 builds on this to clean up newly discovered clippy errors -- I thought it might be nicer as a separate PR.~ Nope, merged it into this one so CI would pass.
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
For some cases, like driving a full screen fragment shader, it is sometimes convenient to not have to create and upload a mesh because the necessary vertices are simple to synthesize in the vertex shader. Bevy's existing pipeline compiler assumes that there will always be a vertex buffer. This PR changes that such that vertex buffer descriptor is only added to the pipeline layout if there are vertex attributes in the shader.
The `Texture::convert` function was previously only compiled when one of
the image format features (`png`, `jpeg`, etc.) was enabled. The
`bevy_sprite` crate needs this function though, which led to compilation
errors when using `cargo check --no-default-features --features render`.
Now the `convert` function is no longer gated behind any feature, and the
`texture_to_image` and `image_to_texture` utility functions live in an
unconditionally compiled module.
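For reference, a hedged usage sketch; the exact signature (taking the target `TextureFormat` and returning an `Option`) is an assumption here, not taken from this commit:
```rust
// assumed shape: fn convert(&self, new_format: TextureFormat) -> Option<Texture>
if let Some(_converted) = texture.convert(TextureFormat::Rgba8UnormSrgb) {
    // use the converted texture, e.g. for sprite rendering
}
```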
* use `length_squared` for visible entities
* ortho projection 2d/3d different depth calculation
* use ScalingMode::FixedVertical for 3d ortho
* new example: 3d orthographic
* add normalized orthographic projection
* custom scale for ScaledOrthographicProjection
* allow choosing base axis for ScaledOrthographicProjection
* cargo fmt
* add general (scaled) orthographic camera bundle
FIXME: does the same "far" trick from Camera2DBundle make any sense here?
* fixes
* camera bundles: rename and new ortho constructors
* unify orthographic projections
* give PerspectiveCameraBundle constructors like those of OrthographicCameraBundle
* update examples with new camera bundle syntax
* rename CameraUiBundle to UiCameraBundle
* update examples
* ScalingMode::None
* remove extra blank lines
* sane default bounds for orthographic projection
* fix alien_cake_addict example
* reorder ScalingMode enum variants
* ios example fix
* Remove AHashExt
There is little benefit of Hash*::new() over Hash*::default(), but it
does require more code that needs to be duplicated for every Hash* in
bevy_utils. It may also slightly increase compile times.
* Add StableHash* to bevy_utils
* Use StableHashMap instead of HashMap + BTreeSet for diagnostics
This is a significant reduction in the release mode compile times of
bevy_diagnostics
```
Benchmark #1: touch crates/bevy_diagnostic/src/lib.rs && cargo build --release -p bevy_diagnostic -j1
Time (mean ± σ): 3.645 s ± 0.009 s [User: 3.551 s, System: 0.094 s]
Range (min … max): 3.632 s … 3.658 s 20 runs
```
```
Benchmark #1: touch crates/bevy_diagnostic/src/lib.rs && cargo build --release -p bevy_diagnostic -j1
Time (mean ± σ): 2.938 s ± 0.012 s [User: 2.850 s, System: 0.090 s]
Range (min … max): 2.919 s … 2.969 s 20 runs
```
* only update global transforms when they (or their ancestors) have changed
* only update render resource nodes when they have changed (quality check plz)
* only update entity mesh specialization when mesh (or mesh component) has changed
* only update sprite size when changed
* remove stale bind groups
* fix setting size of loading sprites
* store unmatched render resource binding results
* reduce state changes
* cargo fmt + clippy
* remove cached "NoMatch" results when new bindings are added to RenderResourceBindings
* inline current_entity in world_builder
* try creating bind groups even when they haven't changed
* render_resources_node: update all entities when resized
* fmt
Extend the Texture asset type to support 3D data
Textures are still loaded from images as 2D, but they can be reshaped
according to how the render pipeline would like to use them.
Also add an example of how this can be used with the texture2DArray uniform type.
* Add rectangular cuboid shape
Co-authored-by: Jason Lessard <jason.lessard@usherbrooke.ca>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
* Remove cfg!(feature = "metal-auto-capture")
This cfg! has existed since the initial commit, but the corresponding
feature has never been part of Cargo.toml
* Remove unnecessary handle_create_window_events call
* Remove EventLoopProxyPtr wrapper
* Remove unnecessary statics
* Fix unrelated deprecation warning to fix CI