# Objective
Fixes: #3368
The issue was caused by the screen size being `(0, 0)`.
## Solution
Don't update clusters if the screen size is zero. A better solution might be to not render when minimized, but this works in the meantime.
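A minimal sketch of the guard, assuming a system that reads the primary window (the system name and exact checks here are illustrative, not the actual Bevy internals):
```rust
use bevy::prelude::*;

// Illustrative sketch: bail out of the cluster update when the primary
// window reports a zero size (e.g. while minimized).
fn update_clusters(windows: Res<Windows>) {
    let window = match windows.get_primary() {
        Some(window) => window,
        None => return,
    };
    if window.physical_width() == 0 || window.physical_height() == 0 {
        // A zero-sized screen would produce degenerate clusters, so skip this frame.
        return;
    }
    // ... recompute the cluster dimensions and bounds here ...
}
```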
Co-authored-by: John <startoaster23@gmail.com>
# Objective
- Only bevy_render should depend directly on wgpu
- This helps to make sure bevy_render re-exports everything needed from wgpu
## Solution
- Remove bevy_pbr, bevy_sprite and bevy_ui dependency on wgpu
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
- Storages are used to store the ECS data.
- They're undocumented.
## Solution
- Add some very basic docs.
## Notes
- Some of this was hard to immediately understand when reading the code, so suggestions on improvements / things to add are particularly welcome.
# Objective
Fixes #3379
## Solution
The custom mesh pipelines needed to be specialized on each mesh's primitive topology, as done in `queue_meshes()`
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective
This PR implements the `overflow` style property in `bevy_ui`. When set to `Overflow::Hidden`, the children of that node are clipped so that overflowing parts are not rendered. This is an important building block for UI widgets.
## Solution
Clipping is done on the CPU so that it does not break batching.
The clip regions update was implemented as a separate system for clarity, but it could be merged with the other UI systems to avoid doing an additional tree traversal. (I don't think it's important until we fix the layout performance issues though).
A scrolling list was added to the `ui_pipelined` example to showcase `Overflow::Hidden`. For the sake of simplicity, it can only be scrolled with a mouse.
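A hedged usage sketch of the new property (the sizes and layout values here are placeholders, not taken from the example):
```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn_bundle(UiCameraBundle::default());
    commands
        // A fixed-size panel that hides anything its children draw outside of it.
        .spawn_bundle(NodeBundle {
            style: Style {
                size: Size::new(Val::Px(200.0), Val::Px(300.0)),
                overflow: Overflow::Hidden,
                ..Default::default()
            },
            ..Default::default()
        })
        .with_children(|parent| {
            // A child taller than the panel: the overflowing part is clipped.
            parent.spawn_bundle(NodeBundle {
                style: Style {
                    size: Size::new(Val::Percent(100.0), Val::Px(1000.0)),
                    ..Default::default()
                },
                ..Default::default()
            });
        });
}
```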
# Objective
Fixes #3352, fixes #3208
## Solution
- Update wgpu to 0.12
- Update naga to 0.8
- Resolve compilation errors
- Remove `[[block]]` from WGSL shaders (because it is deprecated and wgpu can no longer parse it)
- Replace `elseif` with `else if` in pbr.wgsl
Fixes #2566, fixes #3005
Only 4 of the crates here have READMEs (with the exception of bevy itself).
Those 4 crates are ecs, reflect, tasks, and transform.
Each of these should now include its respective README file.
Co-authored-by: Hoidigan <57080125+Hoidigan@users.noreply.github.com>
Co-authored-by: Daniel Nelsen <57080125+Hoidigan@users.noreply.github.com>
# Objective
I thought I'd have a go at trying to fix #2597.
Hopefully fixes #2597.
## Solution
I reused the memory pointed to by the value parameter, that is already required by `insert` to not be dropped, to contain the extracted value while dropping it.
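A rough, simplified sketch of the idea, not the actual `bevy_ecs` code: the caller's bytes are read back as the concrete type and dropped in place rather than copied into a fresh allocation first.
```rust
/// Illustrative only: given a pointer to a value that the caller has promised
/// not to drop (as `insert` requires), reinterpret those bytes as `T` and let
/// the value drop here, reusing the caller's memory instead of allocating.
unsafe fn drop_value_in_place<T>(value: *mut u8) {
    // Moves the bytes into a local `T`, which is dropped at the end of this scope.
    let value = std::ptr::read(value.cast::<T>());
    drop(value);
}
```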
# Objective
- Bring back the multiple windows example, which was viciously murdered in #3175.
- Cart asked me to.
## Solution
- Rework the example to work on pipelined-rendering, based on the work from #2898
# Objective
Every time I come back to Bevy I face the same issue: how do I draw a rectangle again? How did that work? So I go to https://github.com/bevyengine/bevy/tree/main/examples in the hope of finding literally the simplest possible example that draws something on the screen without any dependency such as an image. I don't want to have to add some image first; I just quickly want to get something on the screen with `main.rs` alone so that I can continue building from that point. Such an example is particularly helpful as a quick start for smaller projects that don't need any assets such as images (this is currently my case).
Currently, every single example in https://github.com/bevyengine/bevy/tree/main/examples#2d-rendering (the first section after hello world that beginners will turn to for very minimalistic, quick examples) depends on at least one asset or is too complex. This PR solves this.
It also serves as a great comparison for a beginner to realize what Bevy is really like and how different it is from what they may expect Bevy to be. For example, someone coming from [LÖVE](https://love2d.org/) will have something like this in their head when they think of drawing a rectangle:
```lua
function love.draw()
    love.graphics.setColor(0.25, 0.25, 0.75)
    love.graphics.rectangle("fill", 0, 0, 50, 50)
end
```
This, of course, differs quite a lot from what you do in Bevy. I imagine there will be people who just want to see something as simple as this for comparison, to get a better sense of how much is different.
## Solution
Add a dead simple example drawing a blue 50x50 rectangle in the center with no more and no less than needed.
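For reference, a minimal Bevy counterpart along the lines of what this PR adds (written against the Bevy 0.6 sprite API; treat it as a sketch rather than the exact example file):
```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .run();
}

fn setup(mut commands: Commands) {
    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
    // A blue 50x50 rectangle in the center of the screen.
    commands.spawn_bundle(SpriteBundle {
        sprite: Sprite {
            color: Color::rgb(0.25, 0.25, 0.75),
            custom_size: Some(Vec2::new(50.0, 50.0)),
            ..Default::default()
        },
        ..Default::default()
    });
}
```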
# Objective
Have the bird spawning/collision systems in bevymark use the proper window size, instead of the size set in `WindowDescriptor`, which isn't updated when the window is resized.
## Solution
Use the Windows resource to grab the width/height from the primary window. This is consistent with the other examples.
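A hedged sketch of the pattern (the system body is illustrative, not the bevymark code):
```rust
use bevy::prelude::*;

// Illustrative: read the live window size instead of the WindowDescriptor.
fn spawn_birds(windows: Res<Windows>) {
    let window = windows.get_primary().expect("no primary window");
    let (width, height) = (window.width(), window.height());
    // Spawn/collide within [-width / 2.0, width / 2.0] x [-height / 2.0, height / 2.0].
    let _half_extents = Vec2::new(width / 2.0, height / 2.0);
}
```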
These derives seem to be leftover vestiges of the old renderer.
At least removing them doesn't seem to harm anything.
Edit: thanks to `@forbjok` on Discord for pointing this out.
# Objective
- There are a few warnings about dead links when building the Bevy docs
- CI seems to not catch those warnings when it should
## Solution
- Enable doc CI on the whole Bevy workspace
- Fix warnings
- Also noticed that the `GilrsPlugin` was no longer added when its feature was enabled
First commit to check that CI would actually fail with it: https://github.com/bevyengine/bevy/runs/4532652688?check_suite_focus=true
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
- It isn't very useful to be able to enable feature `trace_chrome` on its own
## Solution
- Enable `trace` feature when enabling `trace_chrome` or `trace_tracy`
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
- Fixes #3344
- Have the examples run faster
## Solution
- Thanks to the nice folks at wgpu, I was able to switch from SwiftShader to lavapipe, which is faster
- I also reduced the runtime of some of the examples
- I enabled the `trace_chrome` feature on the examples and stored the results as an artefact; it can be useful for debugging
- Runtime is back to around 10 minutes
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective And Solution
Add `set_scissor_rect` from wgpu-rs to the `TrackedRenderPass`. wgpu documentation can be found here:
https://docs.rs/wgpu/latest/wgpu/struct.RenderPass.html#method.set_scissor_rect
The reason for adding this is to cull fragments that are outside of the given rect. For my purposes this is extremely useful for UI.
# Objective
- With the removal of the old renderer, Bevy doesn't depend on spirv-reflect 🎉
## Solution
- Remove its advisory from the ignored list
Co-authored-by: François <8672791+mockersf@users.noreply.github.com>
# Objective
Previously, the gilrs system had no explicit relationship to the input
systems. This could potentially cause it to be scheduled after the
input systems, leading to a one-frame lag in gamepad inputs.
This was a regression introduced in #1606, which removed the `PreEvent` stage.
## Solution
This adds an explicit `before` relationship to the gilrs plugin,
ensuring that raw gamepad events will be processed on the same frame
that they are generated.
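Roughly, the ordering constraint looks like this (a sketch, not the actual `bevy_gilrs` plugin code, which wires this up inside the plugin):
```rust
use bevy::input::InputSystem;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Run the gamepad event pump before the input systems so raw gamepad
        // events are consumed on the same frame they are generated.
        .add_system_to_stage(CoreStage::PreUpdate, gilrs_event_system.before(InputSystem))
        .run();
}

// Stand-in for bevy_gilrs's internal event system.
fn gilrs_event_system() {
    // ... poll gilrs and forward raw gamepad events ...
}
```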
Currently the `ClearPassNode` is not exported, due to an additional `use ...;` in the core pipeline's `lib.rs`. This seems unintentional, as there already is a public glob import in the file.
This just removes the explicit use. If it actually was intentional to keep the node internal, let me know.
# Objective
`KeyCode::*Win` and `KeyCode::*Alt` might be confusing for some Mac users.
## Solution
Added some small documentation to clarify the mappings for those developing on a Mac.
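A sketch of the kind of doc comments added (only a few variants shown; the wording is illustrative):
```rust
// Partial sketch of bevy_input's KeyCode, showing only the documented variants.
pub enum KeyCode {
    /// The left Windows key. Maps to the left Command key on macOS.
    LWin,
    /// The right Windows key. Maps to the right Command key on macOS.
    RWin,
    /// The left Alt key. Maps to the left Option key on macOS.
    LAlt,
    /// The right Alt key. Maps to the right Option key on macOS.
    RAlt,
}
```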
## Additional Context
Related issue: #3240
# Objective
This PR fixes a crash when winit is enabled and there is a camera in the world. Part of #3155
## Solution
In this PR, I removed two unwraps and added an example for regression testing.
# Objective
PBR lighting was broken in the new renderer when using orthographic projections due to the way the depth slicing works for the clusters. Fix it.
## Solution
- The default orthographic projection near plane is 0.0. The perspective projection depth slicing does a division by the near plane which gives a floating point NaN and the clustering all breaks down.
- Orthographic projections have a linear depth mapping, so it made intuitive sense to me to do depth slicing with a linear mapping too. The alternative I saw was to try to handle the near plane being at 0.0 and using the exponential depth slicing, but that felt like a hack that didn't make sense.
- As such, I have added code that detects whether the projection is orthographic based on `projection[3][3] == 1.0` and then implemented the orthographic mapping case throughout (when computing cluster AABBs, and when mapping a view space position (or light) to a cluster id in both the rust and shader code).
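The detection itself reduces to checking the last matrix element; a minimal sketch using `glam` (the surrounding cluster code is omitted):
```rust
use bevy::math::Mat4;

/// For a standard orthographic projection the bottom row is (0, 0, 0, 1),
/// so `projection[3][3] == 1.0`; a perspective projection has 0.0 there.
fn is_orthographic(projection: &Mat4) -> bool {
    projection.w_axis.w == 1.0
}
```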
## Screenshots
Before:
![before](https://user-images.githubusercontent.com/302146/145847278-5b1bca74-fbad-4cc5-8b49-384f6a377fdc.png)
After:
<img width="1392" alt="Screenshot 2021-12-13 at 16 36 53" src="https://user-images.githubusercontent.com/302146/145847314-6f3a2035-5d87-4896-8032-0c3e35e15b7d.png">
Old renderer (slightly lighter due to slight difference in configured intensity):
<img width="1392" alt="Screenshot 2021-12-13 at 16 42 23" src="https://user-images.githubusercontent.com/302146/145847391-6a5e6fe0-22da-4fc1-a6c7-440543689a63.png">
This makes the [New Bevy Renderer](#2535) the default (and only) renderer. The new renderer isn't _quite_ ready for the final release yet, but I want as many people as possible to start testing it so we can identify bugs and address feedback prior to release.
The examples are all ported over and operational with a few exceptions:
* I removed a good portion of the examples in the `shader` folder. We still have some work to do in order to make these examples possible / ergonomic / worthwhile: #3120 and "high level shader material plugins" are the big ones. This is a temporary measure.
* Temporarily removed the multiple_windows example: doing this properly in the new renderer will require the upcoming "render targets" changes. Same goes for the render_to_texture example.
* Removed z_sort_debug: entity visibility sort info is no longer available in app logic. We could do this on the "render app" side, but I don't consider it a priority.
Fixes #3043
`surface_texture.present()` will cause panics if no work is done on a given frame. "Views" are how we queue up work. Without any cameras, no work is produced. This adds a "clear pass" for windows without views, which ensures we clear windows (thus doing work) every frame.
This is a "quick fix". It can be made much cleaner once we make "render targets" a concept and move some responsibilities around. Then we just clear the "render target" once instead of clearing "views". I _might_ have time to tackle that work prior to 0.6, but I doubt it. If "render targets" don't make it in to 0.6, they will be one of the first things I tackle after release.
# Objective
Fixes#3247
## Solution
- Added docs to `Axis<T>`. This one can be extended @alice-i-cecile
- Added unit tests
- Added `min` and `max` values to the struct. @alice-i-cecile Do we need these variables?
- Limited `set` method usage with the `min` and `max` values (see the sketch below)
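A simplified sketch of the clamping behaviour (not the exact `bevy_input` source; the `MIN`/`MAX` values follow the gamepad axis range):
```rust
use std::collections::HashMap;
use std::hash::Hash;

pub struct Axis<T> {
    axis_data: HashMap<T, f32>,
}

impl<T: Copy + Eq + Hash> Axis<T> {
    /// Smallest value an axis can report.
    pub const MIN: f32 = -1.0;
    /// Largest value an axis can report.
    pub const MAX: f32 = 1.0;

    /// Stores `value` for `axis`, clamped to `[MIN, MAX]`; returns the previous value.
    pub fn set(&mut self, axis: T, value: f32) -> Option<f32> {
        self.axis_data.insert(axis, value.clamp(Self::MIN, Self::MAX))
    }

    /// Returns the last value set for `axis`, if any.
    pub fn get(&self, axis: T) -> Option<f32> {
        self.axis_data.get(&axis).copied()
    }
}
```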
Co-authored-by: CrazyRoka <rokarostuk@gmail.com>
# Objective
Fixes#3255
## Solution
- mark the `bevy_gltf` feature as required for some examples
This should be cleaned up when we remove the old renderer
# Objective
Port bevy_ui to pipelined-rendering (see #2535)
## Solution
I did some changes during the port:
- [x] separate color from the texture asset (as suggested [here](https://discord.com/channels/691052431525675048/743663924229963868/874353914525413406))
- [x] ~give the vertex shader a per-instance buffer instead of per-vertex buffer~ (incompatible with batching)
Remaining features to implement to reach parity with the old renderer:
- [x] textures
- [x] TextBundle
I'd also like to add these features, but they need some design discussion:
- [x] batching
- [ ] separate opaque and transparent phases
- [ ] multiple windows
- [ ] texture atlases
- [ ] (maybe) clipping
Updates the wireframe rendering initially implemented in https://github.com/bevyengine/bevy/pull/562 to the new renderer.
It lives in `bevy_pbr2` instead of `bevy_render2` because that way it can reuse the `MeshPipeline`.
# Objective
Make it easier to build and use an asset path with `format!()`. This can be useful for accessing assets in a loop.
Enabled by this PR:
```rust
let monkey_handle = asset_server.get_handle(&format!("models/monkey/Monkey.gltf#Mesh0/Primitive0"));
let monkey_handle = asset_server.get_handle(format!("models/monkey/Monkey.gltf#Mesh0/Primitive0"));
```
Before this PR:
```rust
let monkey_handle = asset_server.get_handle(format!("models/monkey/Monkey.gltf#Mesh0/Primitive0").as_str());
```
It's just a tiny improvement in ergonomics, but I ran into it and wondered why the function does not accept a `String`. Bevy is all about simplicity/ergonomics, right? 😄😉
## Solution
Implement `Into<HandleId>` for `String` and `&String`.
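For example, building handles in a loop now works without the `.as_str()` dance (the asset paths and loop here are illustrative):
```rust
use bevy::prelude::*;

fn load_primitives(asset_server: Res<AssetServer>) {
    let mut handles: Vec<Handle<Mesh>> = Vec::new();
    for i in 0..4 {
        // `format!` returns a String, which now converts into a HandleId directly.
        let handle = asset_server
            .get_handle(format!("models/monkey/Monkey.gltf#Mesh0/Primitive{}", i));
        handles.push(handle);
    }
}
```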
# Objective
- Rendering before MainPass should be possible, so clearing needs to happen in an earlier pass.
- Fixes #3190.
## Solution
- I added a "Clear" SubGraph, a "ClearPassNode" Node, that clears the color and depth attachments of all views and a "ClearNodeDriver" Node, that schedules the "ClearPassNode" before MainPass.
- Make sure that the 2d and 3d draw passes do not clear their attachments anymore.
### Notes
- With the current pipelined examples, nothing should have changed in their behaviour
- I would like to add an example that adds a pass in between ClearPass and MainPass, but I do not understand enough about the new render architecture to do that yet
- It clears all attachments for all views: I do not know enough about rendering in general to say whether there is a use case for not clearing
- Does not solve #3043 as we still need Cameras/ViewTargets to clear.
# Objective
The new renderer does not yet support any options for wgpu. These are needed, for example, for rendering wireframes (see #3193).
## Solution
I've ported WgpuOptions to bevy_render2.
The defaults match the defaults that were used before this PR (meaning, some specific options when target_arch = wasm32).
Additionally, I removed `Auto` from WgpuBackends and added `Primary`. The default will use primary or GL based on the target_arch.
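A hedged usage sketch, using the module paths the options ended up under in Bevy 0.6 (they may have moved since, and the chosen feature flag is just an example):
```rust
use bevy::prelude::*;
use bevy::render::options::WgpuOptions;
use bevy::render::render_resource::WgpuFeatures;

fn main() {
    App::new()
        // The options resource must be inserted before DefaultPlugins so the
        // render plugin picks it up when it initialises wgpu.
        .insert_resource(WgpuOptions {
            features: WgpuFeatures::POLYGON_MODE_LINE, // e.g. required for wireframes (#3193)
            ..Default::default()
        })
        .add_plugins(DefaultPlugins)
        .run();
}
```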
# Objective
- there are a few new versions for `ron`, `winit`, `ndk`, `raw-window-handle`
- `cargo-deny` is failing due to new security issues / duplicated dependencies
## Solution
- Update our dependencies
- Note all new security issues, along with which Bevy direct dependency each comes from
- Update the duplicate crate list, along with which Bevy direct dependency each comes from
`notify` is not updated here as it's in #2993
# Objective
Implement clustered-forward rendering.
## Solution
~~FIXME - in the interest of keeping the merge train moving, I'm submitting this PR now before the description is ready. I want to add in some comments into the code with references for the various bits and pieces and I want to describe some of the key decisions I made here. I'll do that as soon as I can.~~ Anyone reviewing is welcome to add review comments where you want to know more about how something or other works.
* The summary of the technique is that the view frustum is divided into a grid of sub-volumes called clusters, point lights are tested against each of the clusters to see if they would affect that volume within the scene and, if so, added to a list of lights affecting that cluster. Then when shading a fragment which is a point on the surface of a mesh within the scene, the point is mapped to a cluster and only the lights affecting that cluster are used in lighting calculations. This brings huge performance and scalability benefits as most of the time lights are placed so that there are not that many that overlap each other in terms of their sphere of influence, but there may be many distinct point lights visible in the scene. Doing all the lighting calculations for all visible lights in the scene for every pixel on the screen quickly becomes a performance limitation. Clustered forward rendering allows us to make an approximate list of lights that affect each pixel, indeed each surface in the scene (as it works along the view z axis too, unlike tiled/forward+).
* WebGL2 is a platform we want to support and it does not support storage buffers. Uniform buffer bindings are limited to a maximum of 16384 bytes per binding. I used bit shifting and masking to pack the cluster light lists and various indices into a uniform buffer, and the 16kB limit is very likely the first bottleneck in scaling the number of lights in a scene at the moment if the lights can affect many clusters due to their range or proximity to the camera (there are a lot of clusters close to the camera, which is an area for improvement). We could store the information in textures instead of uniform buffers to remove this bottleneck, though I don’t know if there are performance implications to reading from textures instead of uniform buffers.
* Because of the uniform buffer binding size limitations, we can support a maximum of 256 lights with the current size of the PointLight struct.
* The z-slicing method (i.e. the mapping from view space z to a depth slice which defines the near and far planes of a cluster) is using the Doom 2016 method (see the sketch after this list). I need to add comments with references to this. It’s an exponential function that simplifies well for the purposes of optimising the fragment shader. xy grid divisions are regular in screen space.
* Some optimisation work was done on the allocation of lights to clusters, which involves intersection tests, and for this number of clusters and lights the system has insignificant cost using a fairly naïve algorithm. I think for more lights / finer-grained clusters we could use a BVH, but at some point it would be just much better to use compute shaders and storage buffers.
* Something else to note is that it is absolutely infeasible to use plain cube map point light shadow mapping for many lights. It does not scale in terms of performance nor memory usage. There are some interesting methods I saw discussed in reference material that I will add a link to which render and update shadow maps piece-wise, but they also need compute shaders to work well. Basically for now you need to sacrifice point light shadows for all but a handful of point lights if you don’t want to kill performance. I set the limit to 10 but that’s just what we had from before where 10 was the maximum number of point lights before this PR.
* I added a couple of debug visualisations behind a shader def that were useful for seeing performance impact of light distribution - I should make the debug mode configurable without modifying the shader code. One mode shows the number of lights affecting each cluster by tinting toward red for few lights or green for many lights (maxes out at 16, but not sure that’s a reasonable max). The other shows which cluster the surface at a fragment belongs to by tinting it with a randomish colour. This can help to understand deeper performance issues due to screen space tiles spanning multiple clusters in depth with divergent shader execution times.
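As a rough illustration of the exponential z-slicing mentioned above (the function shape and constants are illustrative, not the exact Rust/shader code in this PR):
```rust
/// Maps a positive view-space depth to a cluster z-slice index using an
/// exponential distribution between the near and far planes (the
/// "Doom 2016"-style mapping). Slices are thinner near the camera and
/// wider far away.
fn view_z_to_slice(view_z: f32, near: f32, far: f32, z_slices: f32) -> u32 {
    let slice = (view_z / near).ln() / (far / near).ln() * z_slices;
    slice.clamp(0.0, z_slices - 1.0) as u32
}
```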
Also, there are more things that could be done as improvements, and I will document those somewhere (I'm not sure where the best place will be... in a todo alongside the code, a GitHub issue, somewhere else?), but I think it works well enough and brings significant performance and scalability benefits, so it's worth integrating already now and then iterating on.
* Calculate the light’s effective range based on its intensity and physical falloff and either just use this, or take the minimum of the user-supplied range and this. This would avoid unnecessary lighting calculations for clusters that cannot be affected. This would need to take into account HDR tone mapping as in my not-fully-understanding-the-details understanding, the threshold is relative to how bright the scene is.
* Improve the z-slicing to use a larger first slice.
* More gracefully handle the cluster light list uniform buffer binding size limitations by prioritising which lights are included (some heuristic for most significant like closest to the camera, brightest, affecting the most pixels, …)
* Switch to using a texture instead of uniform buffer
* Figure out the / a better story for shadows
I will also probably add an example that demonstrates some of the issues:
* What situations exhaust the space available in the uniform buffers
* Light range too large making lights affect many clusters and so exhausting the space for the lists of lights that affect clusters
* Light range set to be too small producing visible artifacts where clusters the light would physically affect are not affected by the light
* Perhaps some performance issues
* How many lights can be closely packed or affect large portions of the view before performance drops?
Fills in some gaps we had in our Bevy ECS tracing spans:
* Exclusive systems
* System Commands (for `apply_buffers = true` cases)
* System archetype updates
* Parallel system execution prep
Apologies, I had to recreate this PR because of a branching issue.
Old PR: https://github.com/bevyengine/bevy/pull/3033
# Objective
Fixes #3032
Allows a user to create a transparent window.
## Solution
I've allowed the transparent bool to be passed to the winit window builder
# Objective
Fixes #3181
## Solution
Refactored `contributors.rs` example:
- Renamed unclear variables
- Split setup system into two separate systems
Co-authored-by: CrazyRoka <rokarostuk@gmail.com>
# Objective
- Checks for NaN in computed NDC space coordinates, fixing unexpected NaN in a fallible (`Option<T>`) function.
## Solution
- Adds a NaN check, in addition to the existing NDC bounds checks.
- This is a helper function, and should have no performance impact to the engine itself.
- This will help prevent hard-to-trace NaN propagation in user code, by returning `None` instead of `Some(NaN)`.
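A sketch of the added check (the helper shape is illustrative; the real change lives in the camera's world-to-screen conversion):
```rust
use bevy::math::Vec3;

/// Returns the NDC-space coordinates only if they are not NaN and fall inside
/// the visible depth range; NaN now falls through to `None` instead of leaking out.
fn checked_ndc(ndc_space_coords: Vec3) -> Option<Vec3> {
    if !ndc_space_coords.is_nan() && ndc_space_coords.z >= 0.0 && ndc_space_coords.z <= 1.0 {
        Some(ndc_space_coords)
    } else {
        None
    }
}
```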
Depends on https://github.com/bevyengine/bevy/pull/3269 for CI error fix.