bevy/crates/bevy_pbr/Cargo.toml

[package]
name = "bevy_pbr"
version = "0.15.0-dev"
edition = "2021"
description = "Adds PBR rendering to Bevy Engine"
homepage = "https://bevyengine.org"
repository = "https://github.com/bevyengine/bevy"
license = "MIT OR Apache-2.0"
keywords = ["bevy"]

[features]
webgl = []
webgpu = []
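# Enables the transmission textures on `StandardMaterial` (`specular_transmission_texture`,
# `thickness_texture`, `diffuse_transmission_texture`). Off by default for maximum hardware
# compatibility.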
pbr_transmission_textures = []
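# Enables the multi-layer material textures on `StandardMaterial` (e.g. the clearcoat
# factor, roughness, and normal maps). Kept behind a feature to avoid running out of
# texture bindings on platforms like WebGL and WebGPU.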
pbr_multi_layer_material_textures = []
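# Enables the anisotropy texture on `StandardMaterial`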
pbr_anisotropy_texture = []
shader_format_glsl = ["bevy_render/shader_format_glsl"]
trace = ["bevy_render/trace"]
ios_simulator = ["bevy_render/ios_simulator"]
# Enables the meshlet renderer for dense high-poly scenes (experimental)
meshlet = ["dep:lz4_flex", "dep:range-alloc", "dep:bevy_tasks"]
# Enables processing meshes into meshlet meshes
meshlet_processor = [
  "meshlet",
  "dep:meshopt",
  "dep:metis",
  "dep:itertools",
  "dep:bitvec",
]
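# Illustrative example (not part of this manifest): a downstream crate could opt in to the
# experimental meshlet renderer, plus the offline mesh-to-meshlet-mesh processing, by
# enabling these features on its own `bevy_pbr` dependency, e.g.:
#
#   [dependencies]
#   bevy_pbr = { version = "0.15.0-dev", features = ["meshlet", "meshlet_processor"] }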

[dependencies]
# bevy
bevy_app = { path = "../bevy_app", version = "0.15.0-dev" }
bevy_asset = { path = "../bevy_asset", version = "0.15.0-dev" }
bevy_color = { path = "../bevy_color", version = "0.15.0-dev" }
bevy_core_pipeline = { path = "../bevy_core_pipeline", version = "0.15.0-dev" }
bevy_derive = { path = "../bevy_derive", version = "0.15.0-dev" }
bevy_ecs = { path = "../bevy_ecs", version = "0.15.0-dev" }
bevy_math = { path = "../bevy_math", version = "0.15.0-dev" }
bevy_reflect = { path = "../bevy_reflect", version = "0.15.0-dev", features = [
  "bevy",
] }
bevy_render = { path = "../bevy_render", version = "0.15.0-dev" }
bevy_tasks = { path = "../bevy_tasks", version = "0.15.0-dev", optional = true }
bevy_transform = { path = "../bevy_transform", version = "0.15.0-dev" }
bevy_utils = { path = "../bevy_utils", version = "0.15.0-dev" }
bevy_window = { path = "../bevy_window", version = "0.15.0-dev" }
# other
bitflags = "2.3"
fixedbitset = "0.5"
# meshlet
lz4_flex = { version = "0.11", default-features = false, features = [
  "frame",
], optional = true }
derive_more = { version = "1", default-features = false, features = [
  "error",
  "from",
  "display",
] }
range-alloc = { version = "0.1.3", optional = true }
meshopt = { version = "0.3.0", optional = true }
metis = { version = "0.2", optional = true }
itertools = { version = "0.13", optional = true }
bitvec = { version = "1", optional = true }
# direct dependency required for derive macro
bytemuck = { version = "1", features = ["derive", "must_cast"] }
radsort = "0.1"
smallvec = "1.6"
nonmax = "0.5"
static_assertions = "1"

[lints]
workspace = true

[package.metadata.docs.rs]
rustdoc-args = ["-Zunstable-options", "--generate-link-to-definition"]
all-features = true