Commit graph

479 commits

Author SHA1 Message Date
Joona Aalto
de888a373d
Migrate lights to required components (#15554)
# Objective

Another step in the migration to required components: lights!

Note that this does not include `EnvironmentMapLight` or reflection
probes yet, because their API hasn't been fully chosen yet.

## Solution

As per the [selected
proposals](https://hackmd.io/@bevy/required_components/%2FLLnzwz9XTxiD7i2jiUXkJg):

- Deprecate `PointLightBundle` in favor of the `PointLight` component
- Deprecate `SpotLightBundle` in favor of the `SpotLight` component
- Deprecate `DirectionalLightBundle` in favor of the `DirectionalLight`
component

## Testing

I ran some examples with lights.

---

## Migration Guide

`PointLightBundle`, `SpotLightBundle`, and `DirectionalLightBundle` have
been deprecated. Use the `PointLight`, `SpotLight`, and
`DirectionalLight` components instead. Adding them will now insert the
other components required by them automatically.
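A minimal before/after sketch of that migration (the intensity and transform values are just placeholders):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // 0.14: deprecated bundle
    // commands.spawn(PointLightBundle {
    //     point_light: PointLight { intensity: 100_000.0, ..default() },
    //     transform: Transform::from_xyz(4.0, 8.0, 4.0),
    //     ..default()
    // });

    // 0.15: spawn the component directly; its other required components
    // (visibility, etc.) are inserted automatically.
    commands.spawn((
        PointLight {
            intensity: 100_000.0,
            ..default()
        },
        Transform::from_xyz(4.0, 8.0, 4.0),
    ));
}
```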
2024-10-01 03:20:43 +00:00
Trashtalk217
56f8e526dd
The Cooler 'Retain Rendering World' (#15320)
- Adopted from #14449
- Still fixes #12144.

## Migration Guide

The retained render world is a complex change: migrating might take one
of a few different forms depending on the patterns you're using.

For every example, we specify in which world the code runs. Most of the
changes affect render world code, so if you are an average Bevy user relying
on Bevy's high-level rendering APIs, these changes are unlikely to affect
your code.

### Spawning entities in the render world

Previously, if you spawned an entity with `world.spawn(...)`,
`commands.spawn(...)` or some other method in the rendering world, it
would be despawned at the end of each frame. In 0.15, this is no longer
the case and so your old code could leak entities. This can be mitigated
by either re-architecting your code to no longer continuously spawn
entities (like you're used to in the main world), or by adding the
`bevy_render::world_sync::TemporaryRenderEntity` component to the entity
you're spawning. Entities tagged with `TemporaryRenderEntity` will be
removed at the end of each frame (like before).
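For example, a sketch of render-world code that spawns throwaway per-frame entities (the `MyExtractedThing` component is a placeholder, and the import path follows the module quoted above):

```rust
use bevy::prelude::*;
use bevy::render::world_sync::TemporaryRenderEntity;

#[derive(Component)]
struct MyExtractedThing;

// Runs in the render world: entities tagged with `TemporaryRenderEntity`
// are cleaned up at the end of the frame, matching the old behavior.
fn queue_things(mut commands: Commands) {
    commands.spawn((MyExtractedThing, TemporaryRenderEntity));
}
```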

### Extract components with `ExtractComponentPlugin`

```rust
// main world
app.add_plugins(ExtractComponentPlugin::<ComponentToExtract>::default());
```

`ExtractComponentPlugin` has been changed to only work with synced
entities. Entities are automatically synced if `ComponentToExtract` is
added to them. However, entities are not "unsynced" if any given
`ComponentToExtract` is removed, because an entity may have multiple
components to extract. This would cause the other components to no
longer get extracted because the entity is not synced.

So be careful when only removing extracted components from entities in
the render world, because it might leave an entity behind in the render
world. The solution here is to avoid only removing extracted components
and instead despawn the entire entity.
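One way to read that advice, sketched with a placeholder `ComponentToExtract` type:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct ComponentToExtract;

// Rather than `commands.entity(e).remove::<ComponentToExtract>()`, which can
// strand the synced counterpart in the render world, despawn the whole entity.
fn cleanup(mut commands: Commands, query: Query<Entity, With<ComponentToExtract>>) {
    for entity in &query {
        commands.entity(entity).despawn();
    }
}
```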

### Manual extraction using `Extract<Query<(Entity, ...)>>`

```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(Entity, &Clusters, &Camera)>>,
) {
    for (entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(entity).insert(...);
    }
}
```
One of the primary consequences of the retained rendering world is that
there's no longer a one-to-one mapping from entity IDs in the main world
to entity IDs in the render world. Unlike in Bevy 0.14, Entity 42 in the
main world doesn't necessarily map to entity 42 in the render world.

Previously, code called `get_or_spawn(main_world_entity)` in the render
world (`Extract<Query<(Entity, ...)>>` returns main world entities). Now,
you should instead query for `&RenderEntity` and call `render_entity.id()`
to get the correct entity in the render world. Note that the entity does
need to be synced first in order to have a `RenderEntity`.

When performing manual extraction, this won't happen automatically (unlike
with `ExtractComponentPlugin`), so add a `SyncToRenderWorld` marker
component to the entities you want to extract.

This results in the following code:
```rust
// in render world, inspired by bevy_pbr/src/cluster/mod.rs
pub fn extract_clusters(
    mut commands: Commands,
    views: Extract<Query<(&RenderEntity, &Clusters, &Camera)>>,
) {
    for (render_entity, clusters, camera) in &views {
        // some code
        commands.get_or_spawn(render_entity.id()).insert(...);
    }
}

// in main world, when spawning
world.spawn((Clusters::default(), Camera::default(), SyncToRenderWorld));
```

### Looking up `Entity` ids in the render world

As previously stated, there's now no correspondence between main world
and render world `Entity` identifiers.

Querying for `Entity` in the render world will return the `Entity` id in
the render world: query for `MainEntity` (and use its `id()` method) to
get the corresponding entity in the main world.

This is also a good way to tell the difference between synced and
unsynced entities in the render world, because unsynced entities won't
have a `MainEntity` component.
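A sketch of that lookup in a render-world system (`ExtractedThing` is a placeholder component, and the `MainEntity` import path follows the sync module quoted earlier):

```rust
use bevy::prelude::*;
use bevy::render::world_sync::MainEntity;

#[derive(Component)]
struct ExtractedThing;

// `Entity` here is the render-world id; `MainEntity::id()` gives back the
// corresponding main-world id. Unsynced entities simply won't match.
fn log_mapping(query: Query<(Entity, &MainEntity), With<ExtractedThing>>) {
    for (render_entity, main_entity) in &query {
        info!("render {render_entity:?} <- main {:?}", main_entity.id());
    }
}
```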

---------

Co-authored-by: re0312 <re0312@outlook.com>
Co-authored-by: re0312 <45868716+re0312@users.noreply.github.com>
Co-authored-by: Periwink <charlesbour@gmail.com>
Co-authored-by: Anselmo Sampietro <ans.samp@gmail.com>
Co-authored-by: Emerson Coskey <56370779+ecoskey@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Christian Hughes <9044780+ItsDoot@users.noreply.github.com>
2024-09-30 18:51:43 +00:00
ickshonpe
c5742ff43e
Simplified ui_stack_system (#9889)
# Objective

`ui_stack_system` generates a tree of `StackingContexts` which it then
flattens to get the `UiStack`.

But there's no need to construct a new tree. We can query for nodes with
a global `ZIndex`, add those nodes to the root nodes list and then build
the `UiStack` from a walk of the existing layout tree, ignoring any
branches that have a global `ZIndex`.

Fixes #9877

## Solution

Split the `ZIndex` enum into two separate components, `ZIndex` and
`GlobalZIndex`

Query for nodes with a `GlobalZIndex`, add those nodes to the root nodes
list and then build the `UiStack` from a walk of the existing layout
tree, filtering branches by `Without<GlobalZIndex>` so we don't revisit
nodes.

```
cargo run --profile stress-test --features trace_tracy --example many_buttons
```

<img width="672" alt="ui-stack-system-walk-split-enum"
src="https://github.com/bevyengine/bevy/assets/27962798/11e357a5-477f-4804-8ada-c4527c009421">

(Yellow is this PR, red is main)

---

## Changelog
`ZIndex`
* The `ZIndex` enum has been split into two separate components `ZIndex`
(which replaces `ZIndex::Local`) and `GlobalZIndex` (which replaces
`ZIndex::Global`). An entity can have both a `ZIndex` and
`GlobalZIndex`; in comparisons, `ZIndex` breaks ties when two
`GlobalZIndex` values are equal.

`ui_stack_system`
* Instead of generating a tree of `StackingContexts`, query for nodes
with a `GlobalZIndex`, add those nodes to the root nodes list and then
build the `UiStack` from a walk of the existing layout tree, filtering
branches by `Without<GlobalZIndex>` so we don't revisit nodes.

## Migration Guide

The `ZIndex` enum has been split into two separate components `ZIndex`
(which replaces `ZIndex::Local`) and `GlobalZIndex` (which replaces
`ZIndex::Global`). An entity can have both a `ZIndex` and
`GlobalZIndex`; in comparisons, `ZIndex` breaks ties when two
`GlobalZIndex` values are equal.
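For example, a sketch of the new usage (assuming both components are thin `i32` wrappers exported via the prelude; `ui_node` is a hypothetical existing UI node entity):

```rust
use bevy::prelude::*;

// 0.14: ZIndex::Local(5) and ZIndex::Global(10)
// 0.15: ZIndex(5) and GlobalZIndex(10); an entity may carry both, with
// ZIndex breaking ties between equal GlobalZIndex values.
fn raise_node(commands: &mut Commands, ui_node: Entity) {
    commands.entity(ui_node).insert((ZIndex(5), GlobalZIndex(10)));
}
```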

---------

Co-authored-by: Gabriel Bourgeois <gabriel.bourgeoisv4si@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: UkoeHB <37489173+UkoeHB@users.noreply.github.com>
2024-09-30 18:43:57 +00:00
Sou1gh0st
78a3aae81b
feat(gltf): add name component to gltf mesh primitive (#13912)
# Objective

- fixes https://github.com/bevyengine/bevy/issues/13473

## Solution

- When a single mesh is assigned multiple materials, it is divided into
several primitive nodes, with each primitive assigned a unique material.
Presently, these primitives are named using the format Mesh.index, which
complicates querying. To improve this, we can assign a specific name to
each primitive based on the material’s name, since each primitive
corresponds to one material exclusively.

## Testing

- I have included a simple example which shows how to query a material
and mesh part based on the new name component.

## Changelog
- adds a `GltfMaterialName` component to the mesh entity of the gltf
primitive node (see the sketch below).
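A hypothetical query sketch, assuming `GltfMaterialName` is a `String` newtype exported from `bevy::gltf` (the "gold" material name is a placeholder):

```rust
use bevy::gltf::GltfMaterialName;
use bevy::prelude::*;

// Find the mesh primitive entities whose glTF material is named "gold".
fn find_gold_parts(query: Query<(Entity, &GltfMaterialName), With<Handle<Mesh>>>) {
    for (entity, material_name) in &query {
        if material_name.0 == "gold" {
            info!("found gold primitive: {entity:?}");
        }
    }
}
```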

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-09-30 16:51:52 +00:00
Sou1gh0st
39d96ef0fd
Implement volumetric fog support for both point lights and spotlights (#15361)
# Objective
- Fixes: https://github.com/bevyengine/bevy/issues/14451

## Solution
- Adding volumetric fog sampling code for both point lights and
spotlights.

## Testing
- I have modified the example of volumetric_fog.rs by adding a
volumetric point light and a volumetric spotlight.


https://github.com/user-attachments/assets/3eeb77a0-f22d-40a6-a48a-2dd75d55a877
2024-09-29 21:30:53 +00:00
JMS55
9cc7e7c080
Meshlet screenspace-derived tangents (#15084)
* Save 16 bytes per vertex by calculating tangents in the shader at
runtime, rather than storing them in the vertex data.
* Based on https://jcgt.org/published/0009/03/04,
https://www.jeremyong.com/graphics/2023/12/16/surface-gradient-bump-mapping.
* Fixed visbuffer resolve to use the updated algorithm that flips ddy
correctly
* Added some more docs about meshlet material limitations, and some
TODOs about transforming UV coordinates for the future.


![image](https://github.com/user-attachments/assets/222d8192-8c82-4d77-945d-53670a503761)

For testing, add a normal map to the bunnies with `StandardMaterial` like
below, and then test that on both main and this PR (make sure to
download the correct bunny for each). Results should be mostly
identical.

```rust
normal_map_texture: Some(asset_server.load_with_settings(
    "textures/BlueNoise-Normal.png",
    |settings: &mut ImageLoaderSettings| settings.is_srgb = false,
)),
```
2024-09-29 18:39:25 +00:00
Benjamin Brienen
e34a56c963
Better info message (#15432)
# Objective

Fixes the confusion that caused #5660

## Solution

Make it clear that it is the hardware which doesn't support the format
and not bevy's fault.
2024-09-26 13:32:33 +00:00
Clar Fon
efda7f3f9c
Simpler lint fixes: makes ci lints work but disables a lint for now (#15376)
Takes the first two commits from #15375 and adds suggestions from this
comment:
https://github.com/bevyengine/bevy/pull/15375#issuecomment-2366968300

See #15375 for more reasoning/motivation.

## Rebasing (rerunning)

```sh
git switch simpler-lint-fixes
git reset --hard main
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "rustfmt"
cargo clippy --workspace --all-targets --all-features --fix
cargo fmt --all -- --unstable-features --config normalize_comments=true,imports_granularity=Crate
cargo fmt --all
git add --update
git commit --message "clippy"
git cherry-pick e6c0b94f6795222310fb812fa5c4512661fc7887
```
2024-09-24 11:42:59 +00:00
Zachary Harrold
4742f74fc4
Broaden "Check for bevy_internal imports" CI Task (#15333)
# Objective

- Fixes #15319
- Fixes #15317

## Solution

- Updated CI task to check for _any_ `bevy_*` crates, rather than just
`bevy_internal`

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
2024-09-20 17:08:37 +00:00
Wybe Westra
13ca08f32d
Add ASCII art to custom mesh example (#15261) (#15266)
Added ASCII art to the custom mesh example, to clarify the ordering of
the triangle indices.
Fixes #15261.
2024-09-19 21:47:32 +00:00
Patrick Walton
2ae5a21009
Implement percentage-closer soft shadows (PCSS). (#13497)
[*Percentage-closer soft shadows*] are a technique from 2004 that allow
shadows to become blurrier farther from the objects that cast them. It
works by introducing a *blocker search* step that runs before the normal
shadow map sampling. The blocker search step detects the difference
between the depth of the fragment being rasterized and the depth of the
nearby samples in the depth buffer. Larger depth differences result in a
larger penumbra and therefore a blurrier shadow.

To enable PCSS, fill in the `soft_shadow_size` value in
`DirectionalLight`, `PointLight`, or `SpotLight`, as appropriate. This
shadow size value represents the size of the light and should be tuned
as appropriate for your scene. Higher values result in a wider penumbra
(i.e. blurrier shadows).
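For instance, a sketch of enabling PCSS on a directional light (assuming `soft_shadow_size` is an `Option<f32>`; the size value is illustrative):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn(DirectionalLightBundle {
        directional_light: DirectionalLight {
            shadows_enabled: true,
            // Light size in world units; larger values widen the penumbra.
            // Leaving this as `None` keeps PCSS disabled.
            soft_shadow_size: Some(0.05),
            ..default()
        },
        ..default()
    });
}
```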

When using PCSS, temporal shadow maps
(`ShadowFilteringMethod::Temporal`) are recommended. If you don't use
`ShadowFilteringMethod::Temporal` and instead use
`ShadowFilteringMethod::Gaussian`, Bevy will use the same technique as
`Temporal`, but the result won't vary over time. This produces a rather
noisy result. Doing better would likely require downsampling the shadow
map, which would be complex and slower (and would require PR #13003 to
land first).

In addition to PCSS, this commit makes the near Z plane for the shadow
map configurable on a per-light basis. Previously, it had been hardcoded
to 0.1 meters. This change was necessary to make the point light shadow
map in the example look reasonable, as otherwise the shadows appeared
far too aliased.

A new example, `pcss`, has been added. It demonstrates the
percentage-closer soft shadow technique with directional lights, point
lights, spot lights, non-temporal operation, and temporal operation. The
assets are my original work.

Both temporal and non-temporal shadows are rather noisy in the example,
and, as mentioned before, this is unavoidable without downsampling the
depth buffer, which we can't do yet. Note also that the shadows don't
look particularly great for point lights; the example simply isn't an
ideal scene for them. Nevertheless, I felt that the benefits of the
ability to do a side-by-side comparison of directional and point lights
outweighed the unsightliness of the point light shadows in that example,
so I kept the point light feature in.

Fixes #3631.

[*Percentage-closer soft shadows*]:
https://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf

## Changelog

### Added

* Percentage-closer soft shadows (PCSS) are now supported, allowing
shadows to become blurrier as they stretch away from objects. To use
them, set the `soft_shadow_size` field in `DirectionalLight`,
`PointLight`, or `SpotLight`, as applicable.

* The near Z value for shadow maps is now customizable via the
`shadow_map_near_z` field in `DirectionalLight`, `PointLight`, and
`SpotLight`.

## Screenshots

PCSS off:
![Screenshot 2024-05-24
120012](https://github.com/bevyengine/bevy/assets/157897/0d35fe98-245b-44fb-8a43-8d0272a73b86)

PCSS on:
![Screenshot 2024-05-24
115959](https://github.com/bevyengine/bevy/assets/157897/83397ef8-1317-49dd-bfb3-f8286d7610cd)

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
2024-09-18 18:07:17 +00:00
Benjamin Brienen
29508f065f
Fix floating point math (#15239)
# Objective

- Fixes #15236

## Solution

- Use `bevy_math::ops` instead of `std` floating-point operations, as sketched below.
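A small sketch of the pattern (assuming the free functions in `bevy_math::ops` mirror the `std` methods they replace):

```rust
use bevy_math::ops;

// Before: x.sin() * y.powf(2.5)
// After: route through `bevy_math::ops` for consistent results across platforms.
fn example(x: f32, y: f32) -> f32 {
    ops::sin(x) * ops::powf(y, 2.5)
}
```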

## Testing

- Did you test these changes? If so, how?
  Unit tests and `cargo run -p ci -- test`.
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
  Execute `cargo run -p ci -- test` on Windows.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
  Windows.

## Migration Guide

- Not a breaking change
- Projects should use `bevy_math` where applicable

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
2024-09-16 23:28:12 +00:00
Tero Laxström
522d82b21a
Fixing text sizes for examples (#15190)
# Objective

- Fixes #14265

## Solution

- Go through the Pixel Eagle examples (and examples in general)
- If the default size is used, it is usually left as-is
- If the font size was set explicitly, divide it by 1.2 and round to the
nearest whole number

## Testing

- Run each example before and after
- Make sure example text is readable, or at least looks like it did before
the cosmic-text change

---

## Showcase

Before:

![image](https://github.com/user-attachments/assets/beb2d5af-d1ee-4c2c-89c4-8e59c53b53b4)

After:

![image](https://github.com/user-attachments/assets/fef28a8d-dc26-4e0e-9870-6b216de906e8)
2024-09-16 23:14:37 +00:00
Joona Aalto
afbbbd7335
Rename rendering components for improved consistency and clarity (#15035)
# Objective

The names of numerous rendering components in Bevy are inconsistent and
a bit confusing. Relevant names include:

- `AutoExposureSettings`
- `AutoExposureSettingsUniform`
- `BloomSettings`
- `BloomUniform` (no `Settings`)
- `BloomPrefilterSettings`
- `ChromaticAberration` (no `Settings`)
- `ContrastAdaptiveSharpeningSettings`
- `DepthOfFieldSettings`
- `DepthOfFieldUniform` (no `Settings`)
- `FogSettings`
- `SmaaSettings`, `Fxaa`, `TemporalAntiAliasSettings` (really
inconsistent??)
- `ScreenSpaceAmbientOcclusionSettings`
- `ScreenSpaceReflectionsSettings`
- `VolumetricFogSettings`

Firstly, there's a lot of inconsistency between `Foo`/`FooSettings` and
`FooUniform`/`FooSettingsUniform` and whether names are abbreviated or
not.

Secondly, the `Settings` suffix seems unnecessary and a bit confusing
semantically, since it makes it seem like the component is mostly just
auxiliary configuration instead of the core *thing* that actually
enables the feature. This will be an even bigger problem once bundles
like `TemporalAntiAliasBundle` are deprecated in favor of required
components, as users will expect a component named `TemporalAntiAlias`
(or similar), not `TemporalAntiAliasSettings`.

## Solution

Drop the `Settings` suffix from the component names, and change some
names to be more consistent.

- `AutoExposure`
- `AutoExposureUniform`
- `Bloom`
- `BloomUniform`
- `BloomPrefilter`
- `ChromaticAberration`
- `ContrastAdaptiveSharpening`
- `DepthOfField`
- `DepthOfFieldUniform`
- `DistanceFog`
- `Smaa`, `Fxaa`, `TemporalAntiAliasing` (note: we might want to change
to `Taa`, see "Discussion")
- `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflections`
- `VolumetricFog`

I kept the old names as deprecated type aliases to make migration a bit
less painful for users. We should remove them after the next release.
(And let me know if I should just... not add them at all)

I also added some very basic docs for a few types where they were
missing, like on `Fxaa` and `DepthOfField`.

## Discussion

- `TemporalAntiAliasing` is still inconsistent with `Smaa` and `Fxaa`.
Consensus [on
Discord](https://discord.com/channels/691052431525675048/743663924229963868/1280601167209955431)
seemed to be that renaming to `Taa` would probably be fine, but I think
it's a bit more controversial, and it would've required renaming a lot
of related types like `TemporalAntiAliasNode`,
`TemporalAntiAliasBundle`, and `TemporalAntiAliasPlugin`, so I think
it's better to leave to a follow-up.
- I think `Fog` should probably have a more specific name like
`DistanceFog` considering it seems to be distinct from `VolumetricFog`.
~~This should probably be done in a follow-up though, so I just removed
the `Settings` suffix for now.~~ (done)

---

## Migration Guide

Many rendering components have been renamed for improved consistency and
clarity.

- `AutoExposureSettings` → `AutoExposure`
- `BloomSettings` → `Bloom`
- `BloomPrefilterSettings` → `BloomPrefilter`
- `ContrastAdaptiveSharpeningSettings` → `ContrastAdaptiveSharpening`
- `DepthOfFieldSettings` → `DepthOfField`
- `FogSettings` → `DistanceFog`
- `SmaaSettings` → `Smaa`
- `TemporalAntiAliasSettings` → `TemporalAntiAliasing`
- `ScreenSpaceAmbientOcclusionSettings` → `ScreenSpaceAmbientOcclusion`
- `ScreenSpaceReflectionsSettings` → `ScreenSpaceReflections`
- `VolumetricFogSettings` → `VolumetricFog`

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-09-10 01:11:46 +00:00
Rich Churcher
f326705cab
Remove OrthographicProjection.scale (adopted) (#15075)
# Objective

Hello! I am adopting #11022 to resolve conflicts with `main`. tldr: this
removes `scale` in favour of `scaling_mode`. Please see the original PR
for explanation/discussion.

Also relates to #2580.

## Migration Guide

Replace all uses of `scale` with `scaling_mode`, keeping in mind that
`scale` is (was) a multiplier. For example, replace
```rust
    scale: 2.0,
    scaling_mode: ScalingMode::FixedHorizontal(4.0),
```
with
```rust
    scaling_mode: ScalingMode::FixedHorizontal(8.0),
```

---------

Co-authored-by: Stepan Koltsov <stepan.koltsov@gmail.com>
2024-09-09 22:34:58 +00:00
Alix Bott
82e416dc48
Split OrthographicProjection::default into 2d & 3d (Adopted) (#15073)
Adopted PR from dmlary, all credit to them!
https://github.com/bevyengine/bevy/pull/9915

Original description:

# Objective

The default value for `near` in `OrthographicProjection` should be
different for 2d & 3d.

For 2d using `near = -1000` allows bevy users to build up scenes using
background `z = 0`, and foreground elements `z > 0` similar to css.
However in 3d `near = -1000` results in objects behind the camera being
rendered. Using `near = 0` works for 3d, but forces 2d users to assign
`z <= 0` for rendered elements, putting the background at some arbitrary
negative value.

There is no common value for `near` that doesn't result in a footgun or
usability issue for either 2d or 3d, so they should have separate
values.

There was discussion about other options in the discord
[0](https://discord.com/channels/691052431525675048/1154114310042292325),
but splitting `default()` into `default_2d()` and `default_3d()` seemed
like the lowest cost approach.

Related/past work https://github.com/bevyengine/bevy/issues/9138,
https://github.com/bevyengine/bevy/pull/9214,
https://github.com/bevyengine/bevy/pull/9310,
https://github.com/bevyengine/bevy/pull/9537 (thanks to @Selene-Amanita
for the list)

## Solution

This commit splits `OrthographicProjection::default` into `default_2d`
and `default_3d`.

## Migration Guide

- In initialization of `OrthographicProjection`, change `..default()` to
`..OrthographicProjection::default_2d()` or
`..OrthographicProjection::default_3d()`

Example:
```diff
--- a/examples/3d/orthographic.rs
+++ b/examples/3d/orthographic.rs
@@ -20,7 +20,7 @@ fn setup(
         projection: OrthographicProjection {
             scale: 3.0,
             scaling_mode: ScalingMode::FixedVertical(2.0),
-            ..default()
+            ..OrthographicProjection::default_3d()
         }
         .into(),
         transform: Transform::from_xyz(5.0, 5.0, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
```

---------

Co-authored-by: David M. Lary <dmlary@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
2024-09-09 15:51:28 +00:00
JMS55
a0faf9cd01
More triangles/vertices per meshlet (#15023)
### Builder changes
- Increased meshlet max vertices/triangles from 64v/64t to 255v/128t
(meshoptimizer won't allow 256v sadly). This gives us a much greater
percentage of meshlets with max triangle count (128). Still not perfect,
we still end up with some tiny <=10 triangle meshlets that never really
get simplified, but it's progress.
- Removed the error target limit. Now we allow meshoptimizer to simplify
as much as possible. No reason to cap this out, as the cluster culling
code will choose a good LOD level anyways. Again leads to higher quality
LOD trees.
- After some discussion and consulting the Nanite slides again, changed
meshlet group error from _adding_ the max child's error to the group
error, to doing `group_error = max(group_error, max_child_error)`. Error
is already cumulative between LODs as the edges we're collapsing during
simplification get longer each time.
- Bumped the 65% simplification threshold to allow up to 95% of the
original geometry (e.g. accept simplification as valid even if we only
simplified 5% of the triangles). This gives us closer to
log2(initial_meshlet_count) LOD levels, and fewer meshlet roots in the
DAG.

Still more work to be done in the future here. Maybe trying METIS for
meshlet building instead of meshoptimizer.

Using ~8 clusters per group instead of ~4 might also make a big
difference. The Nanite slides say that they have 8-32 meshlets per
group, suggesting some kind of heuristic. Unfortunately meshopt's
compute_cluster_bounds won't work with large groups atm
(https://github.com/zeux/meshoptimizer/discussions/750#discussioncomment-10562641)
so hard to test.

Based on discussion from
https://github.com/bevyengine/bevy/discussions/14998,
https://github.com/zeux/meshoptimizer/discussions/750, and discord.

### Runtime changes
- cluster:triangle packed IDs are now stored 25:7 instead of 26:6 bits,
as max triangles per cluster are now 128 instead of 64
- Hardware raster now spawns 128 * 3 vertices instead of 64 * 3 vertices
to account for the new max triangles limit
- Hardware raster now outputs NaN triangles (0 / 0) instead of
zero-positioned triangles for extra vertex invocations over the cluster
triangle count. This shouldn't really make a difference, but I did it
anyway.
- Software raster now does 128 threads per workgroup instead of 64
threads. Each thread now loads, projects, and caches a vertex (vertices
0-127), and then if needed does so again (vertices 128-254). Each thread
then rasterizes one of 128 triangles.
- Fixed a bug with `needs_dispatch_remap`. I had the condition backwards
in my last PR; I probably committed it by accident after testing the
non-default code path on my GPU.
2024-09-08 17:55:57 +00:00
Hamir Mahal
ec728c31c1
style: simplify string formatting for readability (#15033)
# Objective

The goal of this change is to improve code readability and
maintainability.
2024-09-03 23:35:49 +00:00
Chris Juchem
c620eb7833
Return Results from Camera's world/viewport conversion methods (#14989)
# Objective

- Fixes https://github.com/bevyengine/bevy/issues/14593.

## Solution

- Add `ViewportConversionError` and return it from viewport conversion
methods on Camera.

## Testing

- I successfully compiled and ran all changed examples.

## Migration Guide

The following methods on `Camera` now return a `Result` instead of an
`Option` so that they can provide more information about failures:
 - `world_to_viewport`
 - `world_to_viewport_with_depth`
 - `viewport_to_world`
 - `viewport_to_world_2d`

Call `.ok()` on the `Result` to turn it back into an `Option`, or handle
the `Result` directly.
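A sketch of both options (the query shape and `target` position are placeholders):

```rust
use bevy::prelude::*;

fn show_marker(camera_query: Query<(&Camera, &GlobalTransform)>) {
    let (camera, camera_transform) = camera_query.single();
    let target = Vec3::new(0.0, 1.0, 0.0);

    // Option 1: call `.ok()` to get the old `Option`-flavored flow back.
    let viewport_pos: Option<Vec2> = camera.world_to_viewport(camera_transform, target).ok();
    info!("option 1: {viewport_pos:?}");

    // Option 2: handle the `Result` (and its error) directly.
    match camera.world_to_viewport(camera_transform, target) {
        Ok(pos) => info!("marker at {pos:?}"),
        Err(err) => warn!("conversion failed: {err:?}"),
    }
}
```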

---------

Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Zachary Harrold <zac@harrold.com.au>
2024-09-03 19:45:15 +00:00
Rob Parrett
86d5944b2e
Fix some examples having different instruction text positions (#15017)
# Objective

Thought I had found all of these... noticed some `10px` in #15013 and
did another sweep.

Continuation of #8478, #13583.

## Solution

- Position example text (and other elements) 12px from the edge of the
screen
2024-09-02 22:48:48 +00:00
Robert Walter
8895113784
Use Isometry in bevy_gizmos wherever we can (#14676)
# Objective

- Solves the last bullet in and closes #14319
- Make better use of the `Isometry` types
- Prevent issues like #14655
- Probably simplify and clean up a lot of code through the use of Gizmos
as well (i.e. the 3D gizmos for cylinders, circles & lines don't connect
well, probably due to wrong rotations)

## Solution

- go through the `bevy_gizmos` crate and give all methods a slight
workover

## Testing

- For all the changed examples I run `git switch main && cargo rr
--example <X> && git switch <BRANCH> && cargo rr --example <X>` and
compare the visual results
- Check if all doc tests are still compiling
- Check the docs in general and update them !!! 

---

## Migration Guide

The gizmos methods function signature changes as follows:

- 2D
- if it took `position` & `rotation_angle` before ->
`Isometry2d::new(position, Rot2::radians(rotation_angle))`
- if it just took `position` before ->
`Isometry2d::from_translation(position)`
- 3D
- if it took `position` & `rotation` before ->
`Isometry3d::new(position, rotation)`
- if it just took `position` before ->
`Isometry3d::from_translation(position)`
2024-08-28 01:37:19 +00:00
Shane
484721be80
Have EntityCommands methods consume self for easier chaining (#14897)
# Objective

Fixes #14883

## Solution

Pretty simple update to `EntityCommands` methods to consume `self` and
return it rather than taking `&mut self`. The things probably worth
noting:

* I added `#[allow(clippy::should_implement_trait)]` to the `add` method
because it causes a linting conflict with `std::ops::Add`.
* `despawn` and `log_components` now return `Self`. I'm not sure if
that's exactly the desired behavior so I'm happy to adjust if that seems
wrong.

## Testing

Tested with `cargo run -p ci`. I think that should be sufficient to call
things good.

## Migration Guide

The most likely migration needed is changing code from this:

```
        let mut entity = commands.get_or_spawn(entity);

        if depth_prepass {
            entity.insert(DepthPrepass);
        }
        if normal_prepass {
            entity.insert(NormalPrepass);
        }
        if motion_vector_prepass {
            entity.insert(MotionVectorPrepass);
        }
        if deferred_prepass {
            entity.insert(DeferredPrepass);
        }
```

to this:

```
        let mut entity = commands.get_or_spawn(entity);

        if depth_prepass {
            entity = entity.insert(DepthPrepass);
        }
        if normal_prepass {
            entity = entity.insert(NormalPrepass);
        }
        if motion_vector_prepass {
            entity = entity.insert(MotionVectorPrepass);
        }
        if deferred_prepass {
            entity.insert(DeferredPrepass);
        }
```

as can be seen in several of the example code updates here. There will
probably also be instances where mutable `EntityCommands` vars no longer
need to be mutable.
2024-08-26 18:24:59 +00:00
JMS55
6cc96f4c1f
Meshlet software raster + start of cleanup (#14623)
# Objective
- Faster meshlet rasterization path for small triangles
- Avoid having to allocate and write out a triangle buffer
- Refactor gpu_scene.rs

## Solution
- Replace the 32bit visbuffer texture with a 64bit visbuffer buffer,
where the left 32 bits encode depth, and the right 32 bits encode the
existing cluster + triangle IDs. Can't use 64bit textures, wgpu/naga
doesn't support atomic ops on textures yet.
- Instead of writing out a buffer of packed cluster + triangle IDs (per
triangle) to raster, the culling pass now writes out a buffer of just
cluster IDs (per cluster, so less memory allocated, cheaper to write
out).
  - Clusters for software raster are allocated from the left side
- Clusters for hardware raster are allocated in the same buffer, from
the right side
- The buffer size is fixed at MeshletPlugin build time, and should be
set to a reasonable value for your scene (no warning on overflow, and no
good way to determine what value you need outside of renderdoc - I plan
to fix this in a future PR adding a meshlet stats overlay)
- Currently I don't have a heuristic for software vs hardware raster
selection for each cluster. The existing code is just a placeholder. I
need to profile on a release scene and come up with a heuristic,
probably in a future PR.
- The culling shader is getting pretty hard to follow at this point, but
I don't want to spend time improving it as the entire shader/pass is
getting rewritten/replaced in the near future.
- Software raster is a compute workgroup per-cluster. Each workgroup
loads and transforms the <=64 vertices of the cluster, and then
rasterizes the <=64 triangles of the cluster.
- Two variants are implemented: Scanline for clusters with any larger
triangles (still smaller than hardware is good at), and brute-force for
very very tiny triangles
- Once the shader determines that a pixel should be filled in, it does
an atomicMax() on the visbuffer to store the results, copying how Nanite
works
- On devices with a low max workgroups per dispatch limit, an extra
compute pass is inserted before software raster to convert from a 1d to
2d dispatch (I don't think 3d would ever be necessary).
- I haven't implemented the top-left rule or subpixel precision yet; I'm
leaving that for a future PR since I get usable results without it for
now
- Resources used:
https://kristoffer-dyrkorn.github.io/triangle-rasterizer and chapters
6-8 of
https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index
- Hardware raster now spawns 64*3 vertex invocations per meshlet,
instead of the actual meshlet vertex count. Extra invocations just
early-exit.
- While this is slower than the existing system, hardware draws should
be rare now that software raster is usable, and it saves a ton of memory
using the unified cluster ID buffer. This would be fixed if wgpu had
support for mesh shaders.
- Instead of writing to a color+depth attachment, the hardware raster
pass also does the same atomic visbuffer writes that software raster
uses.
- We have to bind a dummy render target anyways, as wgpu doesn't
currently support render passes without any attachments
- Material IDs are no longer written out during the main rasterization
passes.
- If we had async compute queues, we could overlap the software and
hardware raster passes.
- New material and depth resolve passes run at the end of the visbuffer
node, and write out view depth and material ID depth textures

### Misc changes
- Fixed cluster culling importing, but never actually using the previous
view uniforms when doing occlusion culling
- Fixed incorrectly adding the LOD error twice when building the meshlet
mesh
- Split up the gpu_scene module into meshlet_mesh_manager, instance_manager,
and resource_manager
- resource_manager is still too complex and inefficient (extract and
prepare are way too expensive). I plan on improving this in a future PR,
but for now ResourceManager is mostly a 1:1 port of the leftover
MeshletGpuScene bits.
- Material draw passes have been renamed to the more accurate material
shade pass, as well as some other misc renaming (in the future, these
will be compute shaders even, and not actual draw calls)

---

## Migration Guide
- TBD (ask me at the end of the release for meshlet changes as a whole)

---------

Co-authored-by: vero <email@atlasdostal.com>
2024-08-26 17:54:34 +00:00
IceSentry
d46a05e387
Simplify render_to_texture examples (#14855)
# Objective

- The examples use a more verbose way than necessary to initialize the
image
- The order of the camera doesn't need to be specified. At least I
didn't see a difference in my testing

## Solution

- Use `Image::new_fill()` to fill the image instead of abusing
`resize()`, as sketched below
- Remove the camera ordering
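A sketch of the `Image::new_fill` pattern (the extent, fill pixel, and format are placeholders; the examples additionally adjust the texture usages, which this sketch omits):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat};

// Create an image pre-filled with a single pixel value instead of creating
// an empty image and resizing it afterwards.
fn make_image() -> Image {
    let size = Extent3d {
        width: 512,
        height: 512,
        depth_or_array_layers: 1,
    };
    Image::new_fill(
        size,
        TextureDimension::D2,
        &[0, 0, 0, 255],
        TextureFormat::Bgra8UnormSrgb,
        RenderAssetUsages::default(),
    )
}
```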
2024-08-25 14:15:11 +00:00
Jiří Švejda
3cf70ba4f9
Fix fog density texture offset seam (#14900)
# Objective

- There is a flaw in the implementation of `FogVolume`'s
`density_texture_offset` from #14868. Because of the way I am wrapping
the UVW coordinates in the volumetric fog shader, a seam is visible when
the 3d texture is wrapping around from one side to the other:


![density_texture_offset_seam](https://github.com/user-attachments/assets/89527ef2-5e1b-4b90-8e73-7a3e607697d4)

## Solution

- This PR fixes the issue by removing the wrapping from the shader and
instead leaving it to the user to configure the 3d noise texture to use
`ImageAddressMode::Repeat` if they want it to repeat. Using
`ImageAddressMode::Repeat` is the proper solution to avoid the obvious
seam:


![density_texture_seam_fixed](https://github.com/user-attachments/assets/06e871a6-2db1-4501-b425-4141605f9b26)

- The sampler cannot be implicitly configured to use
`ImageAddressMode::Repeat` because that's not always desirable. For
example, the `fog_volumes` example wouldn't work properly because the
texture from the edges of the volume would overflow to the other sides,
which would be bad in this instance (but it's good in the case of the
`scrolling_fog` example). So leaving it to the user to decide on their
own whether they want the density texture to repeat seems to be the best
solution.

## Testing

- The `scrolling_fog` example still looks the same, it was just changed
to explicitly declare that the density texture should be repeating when
loading the asset. The `fog_volumes` example is unaffected.
<details>
<summary>Minimal reproduction example on current main</summary>

```rust
use bevy::core_pipeline::experimental::taa::{TemporalAntiAliasBundle, TemporalAntiAliasPlugin};
use bevy::pbr::{FogVolume, VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, TemporalAntiAliasPlugin))
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands, assets: Res<AssetServer>) {
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_xyz(3.5, -1.0, 0.4)
                .looking_at(Vec3::new(0.0, 0.0, 0.4), Vec3::Y),
            msaa: Msaa::Off,
            ..default()
        },
        TemporalAntiAliasBundle::default(),
        VolumetricFogSettings {
            ambient_intensity: 0.0,
            jitter: 0.5,
            ..default()
        },
    ));

    commands.spawn((
        DirectionalLightBundle {
            transform: Transform::from_xyz(-6.0, 5.0, -9.0)
                .looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
            directional_light: DirectionalLight {
                illuminance: 32_000.0,
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        VolumetricLight,
    ));

    commands.spawn((
        SpatialBundle {
            visibility: Visibility::Visible,
            transform: Transform::from_xyz(0.0, 0.0, 0.0).with_scale(Vec3::splat(3.0)),
            ..default()
        },
        FogVolume {
            density_texture: Some(assets.load("volumes/fog_noise.ktx2")),
            density_texture_offset: Vec3::new(0.0, 0.0, 0.4),
            scattering: 1.0,
            ..default()
        },
    ));
}
```
</details>
2024-08-24 00:56:39 +00:00
Jiří Švejda
510fce9af3
Allow fog density texture to be scrolled over time with an offset (#14868)
# Objective

- The goal of this PR is to make it possible to move the density texture
of a `FogVolume` over time in order to create dynamic effects like fog
moving in the wind.
- You could theoretically move the `FogVolume` itself, but this is not
ideal, because the `FogVolume` AABB would eventually leave the area. If
you want an area to remain foggy while also creating the impression that
the fog is moving in the wind, a scrolling density texture is a better
solution.

## Solution

- The PR adds a `density_texture_offset` field to the `FogVolume`
component. This offset is in the UVW coordinates of the density texture,
meaning that a value of `(0.5, 0.0, 0.0)` moves the 3d texture by half
along the x-axis.
- Values above 1.0 are wrapped; a 1.5 offset is the same as a 0.5
offset. This makes the density texture wrap around on the other side,
meaning that a repeating 3d noise texture can seamlessly scroll forever.
It also makes it easy to move the density texture over time by simply
increasing the offset every frame (see the sketch below).
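A sketch of scrolling the offset each frame (the speed and axis are arbitrary):

```rust
use bevy::pbr::FogVolume;
use bevy::prelude::*;

// Scroll every fog volume's density texture along +X; offsets above 1.0
// wrap, so a repeating noise texture scrolls seamlessly.
fn scroll_fog(time: Res<Time>, mut volumes: Query<&mut FogVolume>) {
    for mut volume in &mut volumes {
        volume.density_texture_offset += Vec3::X * 0.05 * time.delta_seconds();
    }
}
```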

## Testing

- A `scrolling_fog` example has been added to demonstrate the feature.
It uses the offset to scroll a repeating 3d noise density texture to
create the impression of fog moving in the wind.
- The camera is looking at a pillar with the sun peeking behind it. This
highlights the effect the changing density has on the volumetric
lighting interactions.
- Temporal anti-aliasing combined with the `jitter` option of
`VolumetricFogSettings` is used to improve the quality of the effect.

---

## Showcase


https://github.com/user-attachments/assets/3aa50ebd-771c-4c99-ab5d-255c0c3be1a8
2024-08-22 19:43:14 +00:00
EdJoPaTo
938d810766
Apply unused_qualifications lint (#14828)
# Objective

Fixes #14782

## Solution

Enable the lint and fix all upcoming hints (`--fix`). Also tried to
figure out the false-positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged resulting in less many files touched / less
potential for merge conflicts?

Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
its cases? In the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.

## Testing

`cargo clippy` and `cargo run -p ci` are happy.
2024-08-21 12:29:33 +00:00
akimakinai
c1c003d3c7
Fix num_cascades in split_screen exmample for WebGL (#14601)
# Objective

- Fixes #14595

## Solution

- Use `num_cascades: 1` in WebGL build.
`CascadeShadowConfigBuilder::default()` gives this number in WebGL:
8235daaea0/crates/bevy_pbr/src/light/mod.rs (L241-L248)

## Testing

- Tested the modified example in WebGL with Firefox/Chrome

---------

Co-authored-by: JMS55 <47158642+JMS55@users.noreply.github.com>
2024-08-04 13:57:22 +00:00
Jan Hohenheim
6f7c554daa
Fix common capitalization errors in documentation (#14562)
WASM -> Wasm
MacOS -> macOS

Nothing important, just something that annoyed me for a while :)
2024-07-31 21:16:05 +00:00
Sarthak Singh
a9f4fd8ea1
Disabled usage of the POLYGON_MODE_LINE gpu feature in the examples (#14402)
Fixes #14353
Fixes #14371

---------

Signed-off-by: Sarthak Singh <sarthak.singh99@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: BD103 <59022059+BD103@users.noreply.github.com>
2024-07-29 23:40:39 +00:00
re0312
65628ed4aa
fix meshlet example (#14471)
# Objective

- the meshlet example has been broken since #14273

## Solution

- disable msaa in meshlet example

Co-authored-by: François Mockers <mockersf@gmail.com>
2024-07-25 15:22:11 +00:00
François Mockers
8dc6ccfbe7
fix examples after the switch for msaa to a component (#14446)
# Objective

- #14273 changed MSAA to a component, and broke some examples

- SSAO needs MSAA to be disabled

f0ff7fb544/crates/bevy_pbr/src/ssao/mod.rs (L495)

- `AlphaMode::AlphaToCoverage` needs MSAA to not be off in order to do anything

f0ff7fb544/examples/3d/transparency_3d.rs (L113-L117)

## Solution

- change MSAA in those examples
2024-07-24 01:22:00 +00:00
charlotte
03fd1b46ef
Move Msaa to component (#14273)
Switches `Msaa` from being a globally configured resource to a per
camera view component.

Closes #7194

# Objective

Allow individual views to describe their own MSAA settings. For example,
when rendering to different windows or to different parts of the same
view.

## Solution

Make `Msaa` a component that is required on all camera bundles.

## Testing

Ran a variety of examples to ensure that nothing broke.

TODO:
- [ ] Make sure android still works per previous comment in
`extract_windows`.

---

## Migration Guide

`Msaa` is no longer configured as a global resource, and should be
specified on each spawned camera if a non-default setting is desired, as
sketched below.
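For example, a sketch of opting a single camera out of MSAA (bundle usage matches this era of Bevy):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // `Msaa` now lives on the camera entity (here via the bundle's `msaa`
    // field) instead of being a global resource.
    commands.spawn(Camera3dBundle {
        msaa: Msaa::Off,
        ..default()
    });
}
```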

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <francois.mockers@vleue.com>
2024-07-22 18:28:23 +00:00
Rob Parrett
7fb927f725
Fix button placement in split_screen example (#14405)
# Objective

Fixes the buttons in `split_screen` touching the edge of the viewport.

## Solution

This seems like it might be "normal CSS-like" behavior with
absolutely positioned nodes and padding.
<details>
<summary>HTML test</summary>

```html
<html>
<body>
    <div style="width: 100%; height: 100%; padding: 20px;">
        <div style="width: 100%; height: 100%; padding: 20px; display: flex; justify-content: space-between; align-items: center">
            <div style="width: 40px; height: 40px; border: 1px solid black;">&lt;</div>
            <div style="width: 40px; height: 40px; border: 1px solid black;">&gt;</div>
        </div>
    </div>
</body>
</html>
```

</details>

Instead I just removed the padding from the root node.

## Testing

Added ui debug gizmos to the example and checked before/after.

Before:
<img width="1280" alt="Screenshot 2024-07-20 at 9 23 09 AM"
src="https://github.com/user-attachments/assets/f3cac637-8de9-4acf-bb13-994791998bb7">

After:
<img width="1280" alt="Screenshot 2024-07-20 at 9 37 27 AM"
src="https://github.com/user-attachments/assets/4d3c23b4-5a48-45da-b8a5-a394fd34a44b">
2024-07-20 17:17:14 +00:00
charlotte
3aa525885b
Set scissor on upscale to match camera viewport (#14287)
# Objective

When the user renders multiple cameras to the same output texture, it
can sometimes be confusing what `ClearColorConfig` is necessary for each
camera to avoid overwriting the previous camera's output. This is
particular true in cases where the user uses mixed HDR cameras, which
means that their scene is being rendered to different internal textures.

## Solution

When a view has a configured viewport, set the GPU scissor in the
upscaling node so we don't overwrite areas that were written to by other
cameras.

## Testing

Ran the `split_screen` example.
2024-07-20 16:45:04 +00:00
Sou1gh0st
9da18cce2a
Add support for environment map transformation (#14290)
# Objective

- Fixes: https://github.com/bevyengine/bevy/issues/14036

## Solution

- Add a world space transformation for the environment sample direction.

## Testing

- I have tested the newly added `transform` field using the newly added
`rotate_environment_map` example.


https://github.com/user-attachments/assets/2de77c65-14bc-48ee-b76a-fb4e9782dbdb


## Migration Guide

- Since we have added a new field to the `EnvironmentMapLight` struct,
users will need to include `..default()` or some rotation value in their
initialization code.
2024-07-19 15:00:50 +00:00
Patrick Walton
20c6bcdba4
Allow volumetric fog to be localized to specific, optionally voxelized, regions. (#14099)
Currently, volumetric fog is global and affects the entire scene
uniformly. This is inadequate for many use cases, such as local smoke
effects. To address this problem, this commit introduces *fog volumes*,
which are axis-aligned bounding boxes (AABBs) that specify fog
parameters inside their boundaries. Such volumes can also specify a
*density texture*, a 3D texture of voxels that specifies the density of
the fog at each point.

To create a fog volume, add a `FogVolume` component to an entity (which
is included in the new `FogVolumeBundle` convenience bundle). Like light
probes, a fog volume is conceptually a 1×1×1 cube centered on the
origin; a transform can be used to position and resize this region. Many
of the fields on the existing `VolumetricFogSettings` have migrated to
the new `FogVolume` component. `VolumetricFogSettings` on a camera is
still needed to enable volumetric fog. However, by itself
`VolumetricFogSettings` is no longer sufficient to enable volumetric
fog; a `FogVolume` must be present. Applications that wish to retain the
old global fog behavior can simply surround the scene with a large fog
volume.
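A sketch of that fallback (the scale is a placeholder, and this uses `SpatialBundle` plus `FogVolume` rather than the `FogVolumeBundle` convenience bundle):

```rust
use bevy::pbr::{FogVolume, VolumetricFogSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // The camera still needs `VolumetricFogSettings` to process fog volumes.
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));

    // A fog volume is a unit cube centered on the origin; scale it up to
    // approximate the old global fog behavior.
    commands.spawn((
        SpatialBundle::from_transform(Transform::from_scale(Vec3::splat(100.0))),
        FogVolume::default(),
    ));
}
```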

By way of implementation, this commit converts the volumetric fog shader
from a full-screen shader to one applied to a mesh. The strategy is
different depending on whether the camera is inside or outside the fog
volume. If the camera is inside the fog volume, the mesh is simply a
plane scaled to the viewport, effectively falling back to a full-screen
pass. If the camera is outside the fog volume, the mesh is a cube
transformed to coincide with the boundaries of the fog volume's AABB.
Importantly, in the latter case, only the front faces of the cuboid are
rendered. Instead of treating the boundaries of the fog as a sphere
centered on the camera position, as we did prior to this patch, we
raytrace the far planes of the AABB to determine the portion of each ray
contained within the fog volume. We then raymarch in shadow map space as
usual. If a density texture is present, we modulate the fixed density
value with the trilinearly-interpolated value from that texture.

Furthermore, this patch introduces optional jitter to fog volumes,
intended for use with TAA. This modifies the position of the ray from
frame to frame using interleaved gradient noise, in order to reduce
aliasing artifacts. Many implementations of volumetric fog in games use
this technique. Note that this patch makes no attempt to write a motion
vector; this is because when a view ray intersects multiple voxels
there's no single direction of motion. Consequently, fog volumes can
have ghosting artifacts, but because fog is "ghostly" by its nature,
these artifacts are less objectionable than they would be for opaque
objects.

A new example, `fog_volumes`, has been added. It demonstrates a single
fog volume containing a voxelized representation of the Stanford bunny.
The existing `volumetric_fog` example has been updated to use the new
local volumetrics API.

## Changelog

### Added

* Local `FogVolume`s are now supported, to localize fog to specific
regions. They can optionally have 3D density voxel textures for precise
control over the distribution of the fog.

### Changed

* `VolumetricFogSettings` on a camera no longer enables volumetric fog;
instead, it simply enables the processing of `FogVolume`s within the
scene.

## Migration Guide

* A `FogVolume` is now necessary in order to enable volumetric fog, in
addition to `VolumetricFogSettings` on the camera. Existing uses of
volumetric fog can be migrated by placing a large `FogVolume`
surrounding the scene.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François Mockers <mockersf@gmail.com>
2024-07-16 03:14:12 +00:00
Sou1gh0st
65aae92127
Add support for skybox transformation (#14267)
# Objective

- Fixes https://github.com/bevyengine/bevy/issues/14036

## Solution

- Add a view space transformation for the skybox

## Testing

- I have tested the newly added `transform` field using the `skybox`
example.
```diff
diff --git a/examples/3d/skybox.rs b/examples/3d/skybox.rs
index beaf5b268..d16cbe988 100644
--- a/examples/3d/skybox.rs
+++ b/examples/3d/skybox.rs
@@ -81,6 +81,7 @@ fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
         Skybox {
             image: skybox_handle.clone(),
             brightness: 1000.0,
+            rotation: Quat::from_rotation_x(PI * -0.5),
         },
     ));
```
<img width="1280" alt="image"
src="https://github.com/bevyengine/bevy/assets/6300263/1230a608-58ea-492d-a811-90c54c3b43ef">


## Migration Guide
- Since we have added a new field to the `Skybox` struct, users will need
to include `..Default::default()` or some rotation value in their
initialization code.
2024-07-15 15:53:20 +00:00
JMS55
6e8d43a037
Faster MeshletMesh deserialization (#14193)
# Objective
- Using bincode to deserialize binary into a MeshletMesh is expensive
(~77ms for a 5mb file).

## Solution
- Write a custom deserializer using bytemuck's Pod types and slice
casting.
  - Total asset load time has gone from ~102ms to ~12ms.
- Change some types I never meant to be public to private and other misc
cleanup.

## Testing
- Ran the meshlet example and added timing spans to the asset loader.

---

## Changelog
- Improved `MeshletMesh` loading speed
- The `MeshletMesh` disk format has changed, and
`MESHLET_MESH_ASSET_VERSION` has been bumped
- `MeshletMesh` fields are now private
- Renamed `MeshletMeshSaverLoad` to `MeshletMeshSaverLoader`
- The `Meshlet`, `MeshletBoundingSpheres`, and `MeshletBoundingSphere`
types are now private
- Removed `MeshletMeshSaveOrLoadError::SerializationOrDeserialization`
- Added `MeshletMeshSaveOrLoadError::WrongFileType`

## Migration Guide
- Regenerate your `MeshletMesh` assets, as the disk format has changed,
and `MESHLET_MESH_ASSET_VERSION` has been bumped
- `MeshletMesh` fields are now private
- `MeshletMeshSaverLoad` is now named `MeshletMeshSaverLoader`
- The `Meshlet`, `MeshletBoundingSpheres`, and `MeshletBoundingSphere`
types are now private
- `MeshletMeshSaveOrLoadError::SerializationOrDeserialization` has been
removed
- Added `MeshletMeshSaveOrLoadError::WrongFileType`, match on this
variant if you match on `MeshletMeshSaveOrLoadError`
2024-07-15 15:06:02 +00:00
Patrick Walton
fcda67e894
Start a built-in postprocessing stack, and implement chromatic aberration in it. (#13695)
This commit creates a new built-in postprocessing shader that's designed
to hold miscellaneous postprocessing effects, and starts it off with
chromatic aberration. Possible future effects include vignette, film
grain, and lens distortion.

[Chromatic aberration] is a common postprocessing effect that simulates
lenses that fail to focus all colors of light to a single point. It's
often used for impact effects and/or horror games. This patch uses the
technique from *Inside* ([Gjøl & Svendsen 2016]), which allows the
developer to customize the particular color pattern to achieve different
effects. Unity HDRP uses the same technique, while Unreal has a
hard-wired fixed color pattern.

A new example, `post_processing`, has been added, in order to
demonstrate the technique. The existing `post_processing` shader has
been renamed to `custom_post_processing`, for clarity.

[Chromatic aberration]:
https://en.wikipedia.org/wiki/Chromatic_aberration

[Gjøl & Svendsen 2016]:
https://github.com/playdeadgames/publications/blob/master/INSIDE/rendering_inside_gdc2016.pdf

![Screenshot 2024-06-04
180304](https://github.com/bevyengine/bevy/assets/157897/3631c64f-a615-44fe-91ca-7f04df0a54b2)

![Screenshot 2024-06-04
180743](https://github.com/bevyengine/bevy/assets/157897/ee055cbf-4314-49c5-8bfa-8d8a17bd52bb)

## Changelog

### Added

* Chromatic aberration is now available as a built-in postprocessing
effect. To use it, add `ChromaticAberration` to your camera (see the
sketch below).
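For instance (a sketch; this assumes `ChromaticAberration` is exported from `bevy_core_pipeline`'s `post_process` module and that its `Default` gives a visible intensity):

```rust
use bevy::core_pipeline::post_process::ChromaticAberration;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Enable the built-in chromatic aberration effect on this camera.
    commands.spawn((Camera3dBundle::default(), ChromaticAberration::default()));
}
```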
2024-07-15 13:59:02 +00:00
Johannes Vollmer
57d05927d6
update gltf example to use type-safe GltfAssetLabel::Scene (#14218)
# Objective

update the `load_gltf_extras.rs` example to the newest bevy api

## Solution

uses the new type-safe code for loading the scene #0 from the gltf
instead of a path suffix

## Testing

the example runs as expected
2024-07-14 15:42:32 +00:00
TotalKrill
5986d5d309
Cosmic text (#10193)
# Replace ab_glyph with the more capable cosmic-text

Fixes #7616.

Cosmic-text is a more mature text-rendering library that handles scripts
and ligatures better than ab_glyph; it can also handle system fonts,
which could be supported in Bevy in the future.

Rebase of https://github.com/bevyengine/bevy/pull/8808

## Changelog

Replaces text renderer ab_glyph with cosmic-text

The definition of the font size has changed with the migration to cosmic
text. The behavior is now consistent with other platforms (e.g. the
web), where the font size in pixels measures the height of the font (the
distance between the top of the highest ascender and the bottom of the
lowest descender). Font sizes in your app need to be rescaled to
approximately 1.2x smaller; for example, if you were using a font size
of 60.0, you should now use a font size of 50.0.

## Migration guide

- `Text2dBounds` has been replaced with `TextBounds`, and it now accepts
`Option`s for the bounds, instead of using `f32::INFINITY` to indicate
lack of bounds
- Text sizes should be changed; dividing the current size by 1.2 will
result in the same size as before.
- `TextSettings` struct is removed
- Feature `subpixel_alignment` has been removed since cosmic-text
already does this automatically
- `TextBundle`s and other entities that render text now require the
`CosmicBuffer` component on them as well

## Suggested followups:

- TextPipeline: reconstruct byte indices for keeping track of eventual
cursors in text input
- TextPipeline: (future work) split text entities into section entities
- TextPipeline: (future work) text editing
- Support line height as an option. Unitless `1.2` is the default used
in browsers (1.2x font size).
- Support System Fonts and font families
- Example showing off animated text styles, e.g. throbbing hyperlinks

---------

Co-authored-by: tigregalis <anak.harimau@gmail.com>
Co-authored-by: Nico Burns <nico@nicoburns.com>
Co-authored-by: sam edelsten <samedelsten1@gmail.com>
Co-authored-by: Dimchikkk <velo.app1@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2024-07-04 20:41:08 +00:00
Lura
856b39d821
Apply Clippy lints regarding lazy evaluation and closures (#14015)
# Objective

- Lazily evaluate
[default](https://rust-lang.github.io/rust-clippy/master/index.html#/unwrap_or_default)~~/[or](https://rust-lang.github.io/rust-clippy/master/index.html#/or_fun_call)~~
values where it makes sense
  - ~~`unwrap_or(foo())` -> `unwrap_or_else(|| foo())`~~
  - `unwrap_or(Default::default())` -> `unwrap_or_default()`
  - etc.
- Avoid creating [redundant
closures](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure),
even for [method
calls](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure_for_method_calls)
  - `map(|something| something.into())` -> `map(Into::into)`
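
For illustration, a small before/after sketch of the kind of call sites these lints rewrite (not actual Bevy code):

```rust
fn tidy(maybe_values: Option<Vec<u32>>, names: Vec<&str>) -> (Vec<u32>, Vec<String>) {
    // Before: maybe_values.unwrap_or(Default::default())
    let values = maybe_values.unwrap_or_default();

    // Before: names.into_iter().map(|name| name.to_string())
    let owned: Vec<String> = names.into_iter().map(ToString::to_string).collect();

    (values, owned)
}
```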

## Solution

- Apply Clippy lints:
-
~~[or_fun_call](https://rust-lang.github.io/rust-clippy/master/index.html#/or_fun_call)~~
-
[unwrap_or_default](https://rust-lang.github.io/rust-clippy/master/index.html#/unwrap_or_default)
-
[redundant_closure_for_method_calls](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure_for_method_calls)
([redundant
closures](https://rust-lang.github.io/rust-clippy/master/index.html#/redundant_closure)
is already enabled)

## Testing

- Tested on Windows 11 (`stable-x86_64-pc-windows-gnu`, 1.79.0)
- Bevy compiles without errors or warnings and examples seem to work as
intended
  - `cargo clippy` 
  - `cargo run -p ci -- compile` 

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-07-01 15:54:40 +00:00
François Mockers
ee63cf45c6
fix examples color_grading and mobile after BackgroundColor changes (#14033)
# Objective

- #14017 changed how `UiImage` and `BackgroundColor` work
- one change was missed in example `color_grading`, another in the
mobile example

## Solution

- Change it in the examples
2024-06-26 12:54:23 +00:00
Alice Cecile
336fddb101
Make default behavior for BackgroundColor and BorderColor more intuitive (#14017)
# Objective

In Bevy 0.13, `BackgroundColor` simply tinted the image of any
`UiImage`. This was confusing: in every other case (e.g. Text), this
added a solid square behind the element. #11165 changed this, but
removed `BackgroundColor` from `ImageBundle` to avoid confusion, since
the semantic meaning had changed.

However, this resulted in a serious UX downgrade / inconsistency, as
this behavior was no longer part of the bundle (unlike for `TextBundle`
or `NodeBundle`), leaving users with a relatively frustrating upgrade
path.

Additionally, adding both `BackgroundColor` and `UiImage` resulted in a
bizarre effect, where the background color was seemingly ignored as it
was covered by a solid white placeholder image.

Fixes #13969.

## Solution

Per @viridia's design:

> - if you don't specify a background color, it's transparent.
> - if you don't specify an image color, it's white (because it's a
multiplier).
> - if you don't specify an image, no image is drawn.
> - if you specify both a background color and an image color, they are
independent.
> - the background color is drawn behind the image (in whatever pixels
are transparent)

As laid out by @benfrankel, this involves:

1. Changing the default `UiImage` to use a transparent texture but a
pure white tint.
2. Adding `UiImage::solid_color` to quickly set placeholder images.
3. Changing the default `BorderColor` and `BackgroundColor` to
transparent.
4. Removing the default overrides for these values in the other assorted
UI bundles.
5. Adding `BackgroundColor` back to `ImageBundle` and `ButtonBundle`.
6. Adding a 1x1 `Image::transparent`, which can be accessed from
`Assets<Image>` via the `TRANSPARENT_IMAGE_HANDLE` constant.

Huge thanks to everyone who helped out with the design in the linked
issue and [the Discord
thread](https://discord.com/channels/691052431525675048/1255209923890118697/1255209999278280844):
this was very much a joint design.

@cart helped me figure out how to set the UiImage's default texture to a
transparent 1x1 image, which is a much nicer fix.

## Testing

I've checked the examples modified by this PR, and the `ui` example as
well just to be sure.

## Migration Guide

- `BackgroundColor` no longer tints the color of images in `ImageBundle`
or `ButtonBundle`. Set `UiImage::color` to tint images instead.
- The default texture for `UiImage` is now a transparent white square.
Use `UiImage::solid_color` to quickly draw debug images.
- The default value for `BackgroundColor` and `BorderColor` is now
transparent. Set the color to white manually to return to previous
behavior.
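
A hedged sketch of the new behavior (exact field names assumed):

```rust
use bevy::prelude::*;

fn spawn_image(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn(ImageBundle {
        // Tint the image via UiImage::color; BackgroundColor no longer tints it.
        image: UiImage {
            texture: asset_server.load("icons/example.png"),
            color: Color::srgb(1.0, 0.8, 0.8),
            ..default()
        },
        // Drawn behind the image, visible wherever the image is transparent.
        background_color: BackgroundColor(Color::BLACK),
        ..default()
    });
}
```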
2024-06-25 21:50:41 +00:00
Rob Parrett
e46e246581
Fix a few "repeated word" typos (#13955)
# Objective

Stumbled on one of these and went digging for more

## Solution

```diff
- word word
+ word
```
2024-06-20 21:35:20 +00:00
Rob Parrett
c66c2c7420
Omit font size where it closely matches the default in examples (#13952)
# Objective

In a few examples, we're specifying a font or font size that is the same
as the current default value. Might as well use the default. That'll be
one less thing to worry about if we ever need to change the default font
size. (wink)

In a few others, we were using a value of `25.0` and it didn't seem like
it was different for an important reason, so I switched to the default
there too.

(There are a bunch of examples that use non-default font sizes for
various reasons. Not trying address them all here.)
2024-06-20 21:01:28 +00:00
Kornel
435d9bc02c
Highlight dependency on shader files in examples (#13824)
The examples won't work when copy-pasted to another project without
also copying their shader files. This change adds constants at the top
of the files to bring attention to the dependencies.

Follow up to
[#13624](https://github.com/bevyengine/bevy/pull/13624#issuecomment-2143872791)
2024-06-12 14:16:01 +00:00
Lynn
fd82ef8956
Meshable extrusions (#13478)
# Objective

- Implement `Meshable` for `Extrusion<T>`

## Solution

- `Meshable` now requires `Meshable::Output: MeshBuilder`. This means
that all `some_primitive.mesh()` calls now return a `MeshBuilder`. New
builders were added for primitives that did not have one prior to this.
- A new trait `Extrudable: MeshBuilder` has been added. This trait
allows you to specify the indices of the perimeter of the mesh created
by this `MeshBuilder` and whether they are to be shaded smooth or flat.
- `Extrusion<P: Primitive2d + Meshable>` is now `Meshable` as well. The
associated `MeshBuilder` is `ExtrusionMeshBuilder`, which is generic over
`P` and uses the `MeshBuilder` of its base shape internally.
- `ExtrusionMeshBuilder` exposes the configuration functions of its
base-shapes builder.
- Updated the `3d_shapes` example to include `Extrusion`s

## Migration Guide

- Depending on the context, you may need to explicitly call
`.mesh().build()` on primitives where you have previously called
`.mesh()`
- The `Output` type of custom `Meshable` implementations must now derive
`MeshBuilder`.
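
A minimal sketch of the new builder flow (the `Extrusion` constructor and prelude re-exports are assumed; if the `Meshable`/`MeshBuilder` traits are not in the prelude, import them from `bevy::render::mesh`):

```rust
use bevy::prelude::*;

fn build_meshes() -> (Mesh, Mesh) {
    // .mesh() now returns a MeshBuilder, so an explicit .build() may be needed.
    let circle: Mesh = Circle::new(0.5).mesh().build();
    // An extruded 2D primitive meshes the same way; here a circle extruded to depth 1.0.
    let extruded_circle: Mesh = Extrusion::new(Circle::new(0.5), 1.0).mesh().build();
    (circle, extruded_circle)
}
```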

## Additional information
- The extrusion's UVs are laid out so that
- the front face (`+Z`) is in the area between `(0, 0)` and `(0.5,
0.5)`,
- the back face (`-Z`) is in the area between `(0.5, 0)` and `(1, 0.5)`
- the mantle is in the area between `(0, 0.5)` and `(1, 1)`. Each
`PerimeterSegment` you specified in the `Extrudable` implementation will
be allocated an equal portion of this area.
- The UVs of the base shape are scaled to be in the front/back area so
whatever method of filling the full UV-space the base shape used is how
these areas will be filled.

Here is an example of what that looks like on a capsule:


https://github.com/bevyengine/bevy/assets/62256001/425ad288-fbbc-4634-9d3f-5e846cdce85f

This is the texture used:

![extrusion
uvs](https://github.com/bevyengine/bevy/assets/62256001/4e54e421-bfda-44b9-8571-412525cebddf)

The `3d_shapes` example now looks like this:


![image_2024-05-22_235915753](https://github.com/bevyengine/bevy/assets/62256001/3d8bc86d-9ed1-47f2-899a-27aac0a265dd)

---------

Co-authored-by: Lynn Büttgenbach <62256001+solis-lumine-vorago@users.noreply.github.com>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
Co-authored-by: Matty <2975848+mweatherley@users.noreply.github.com>
2024-06-04 17:27:32 +00:00
Patrick Walton
ace4c6023b
Implement subpixel morphological antialiasing, or SMAA. (#13423)
This commit implements a large subset of [*subpixel morphological
antialiasing*], better known as SMAA. SMAA is a 2011 antialiasing
technique that detects jaggies in an aliased image and smooths them out.
Despite its age, it's been a continual staple of games for over a
decade. Four quality presets are available: *low*, *medium*, *high*, and
*ultra*. I set the default to *high*, on account of modern GPUs being
significantly faster than they were in 2011.

Like the already-implemented FXAA, SMAA works on an unaliased image.
Unlike FXAA, it requires three passes: (1) edge detection; (2) blending
weight calculation; (3) neighborhood blending. Each of the first two
passes writes an intermediate texture for use by the next pass. The
first pass also writes to a stencil buffer in order to dramatically
reduce the number of pixels that the second pass has to examine. Also
unlike FXAA, two built-in lookup textures are required; I bundle them
into the library in compressed KTX2 format.

The [reference implementation of SMAA] is in HLSL, with abundant use of
preprocessor macros to achieve GLSL compatibility. Unfortunately, the
reference implementation predates WGSL by over a decade, so I had to
translate the HLSL to WGSL manually. As much as was reasonably possible
without sacrificing readability, I tried to translate line by line,
preserving comments, both to aid reviewing and to allow patches to the
HLSL to more easily apply to the WGSL. Most of SMAA's features are
supported, but in the interests of making this patch somewhat less huge,
I skipped a few of the more exotic ones:

* The temporal variant is currently unsupported. This is and has been
used in shipping games, so supporting temporal SMAA would be useful
follow-up work. It would, however, require some significant work on TAA
to ensure compatibility, so I opted to skip it in this patch.

* Depth- and chroma-based edge detection are unimplemented; only luma
is. Depth is lower-quality, but faster; chroma is higher-quality, but
slower. Luma is the suggested default edge detection algorithm. (Note
that depth-based edge detection wouldn't work on WebGL 2 anyway, because
of the Naga bug whereby depth sampling is miscompiled in GLSL. This is
the same bug that prevents depth of field from working on that
platform.)

* Predicated thresholding is currently unsupported.

* My implementation is incompatible with SSAA and MSAA, unlike the
original; MSAA must be turned off to use SMAA in Bevy. I believe this
feature was rarely used in practice.

The `anti_aliasing` example has been updated to allow experimentation
with and testing of the different SMAA quality presets. Along the way, I
refactored the example's help text rendering code a bit to eliminate
code repetition.

SMAA is fully supported on WebGL 2.

Fixes #9819.

[*subpixel morphological antialiasing*]: https://www.iryoku.com/smaa/

[reference implementation of SMAA]: https://github.com/iryoku/smaa

## Changelog

### Added

* Subpixel morphological antialiasing, or SMAA, is now available. To use
it, add the `SmaaSettings` component to your `Camera`.
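
A minimal sketch of enabling it (module path assumed; preset names are from the description above):

```rust
use bevy::core_pipeline::smaa::{SmaaPreset, SmaaSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // This SMAA implementation is incompatible with MSAA, so turn MSAA off
    // (a resource at the time of this PR).
    commands.insert_resource(Msaa::Off);
    commands.spawn((
        Camera3dBundle::default(),
        SmaaSettings { preset: SmaaPreset::High }, // Low / Medium / High / Ultra
    ));
}
```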

![Screenshot 2024-05-18
134311](https://github.com/bevyengine/bevy/assets/157897/ffbd611c-1b32-4491-b2e2-e410688852ee)

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-06-04 17:07:34 +00:00
Patrick Walton
df8ccb8735
Implement PBR anisotropy per KHR_materials_anisotropy. (#13450)
This commit implements support for physically-based anisotropy in Bevy's
`StandardMaterial`, following the specification for the
[`KHR_materials_anisotropy`] glTF extension.

[*Anisotropy*] (not to be confused with [anisotropic filtering]) is a
PBR feature that allows roughness to vary along the tangent and
bitangent directions of a mesh. In effect, this causes the specular
light to stretch out into lines instead of a round lobe. This is useful
for modeling brushed metal, hair, and similar surfaces. Support for
anisotropy is a common feature in major game and graphics engines;
Unity, Unreal, Godot, three.js, and Blender all support it to varying
degrees.

Two new parameters have been added to `StandardMaterial`:
`anisotropy_strength` and `anisotropy_rotation`. Anisotropy strength,
which ranges from 0 to 1, represents how much the roughness differs
between the tangent and the bitangent of the mesh. In effect, it
controls how stretched the specular highlight is. Anisotropy rotation
allows the roughness direction to differ from the tangent of the model.

In addition to these two fixed parameters, an *anisotropy texture* can
be supplied. Such a texture should be a 3-channel RGB texture, where the
red and green values specify a direction vector using the same
conventions as a normal map ([0, 1] color values map to [-1, 1] vector
values), and the blue value represents the strength. This matches
the format that the [`KHR_materials_anisotropy`] specification requires.
Such textures should be loaded as linear and not sRGB. Note that this
texture does consume one additional texture binding in the standard
material shader.
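
A hedged sketch of the two new parameters in use (values are illustrative):

```rust
use bevy::prelude::*;

fn brushed_metal() -> StandardMaterial {
    StandardMaterial {
        base_color: Color::srgb(0.8, 0.8, 0.85),
        metallic: 1.0,
        perceptual_roughness: 0.4,
        // How much the roughness differs between the tangent and bitangent (0.0..=1.0).
        anisotropy_strength: 0.8,
        // Rotate the anisotropy direction away from the mesh tangent (in radians).
        anisotropy_rotation: std::f32::consts::FRAC_PI_4,
        ..default()
    }
}
```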

The glTF loader has been updated to properly parse the
`KHR_materials_anisotropy` extension.

A new example, `anisotropy`, has been added. This example loads and
displays the barn lamp example from the [`glTF-Sample-Assets`]
repository. Note that the textures were rather large, so I shrunk them
down and converted them to a mixture of JPEG and KTX2 format, in the
interests of saving space in the Bevy repository.

[*Anisotropy*]:
https://google.github.io/filament/Filament.md.html#materialsystem/anisotropicmodel

[anisotropic filtering]:
https://en.wikipedia.org/wiki/Anisotropic_filtering

[`KHR_materials_anisotropy`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_anisotropy/README.md

[`glTF-Sample-Assets`]:
https://github.com/KhronosGroup/glTF-Sample-Assets/

## Changelog

### Added

* Physically-based anisotropy is now available for materials, which
enhances the look of surfaces such as brushed metal or hair. glTF scenes
can use the new feature with the `KHR_materials_anisotropy` extension.

## Screenshots

With anisotropy:
![Screenshot 2024-05-20
233414](https://github.com/bevyengine/bevy/assets/157897/379f1e42-24e9-40b6-a430-f7d1479d0335)

Without anisotropy:
![Screenshot 2024-05-20
233420](https://github.com/bevyengine/bevy/assets/157897/aa220f05-b8e7-417c-9671-b242d4bf9fc4)
2024-06-03 23:46:06 +00:00
Ricky Taylor
9b9d3d81cb
Normalise matrix naming (#13489)
# Objective
- Fixes #10909
- Fixes #8492

## Solution
- Name all matrices `x_from_y`, for example `world_from_view`.

## Testing
- I've tested most of the 3D examples. The `lighting` example
particularly should hit a lot of the changes and appears to run fine.

---

## Changelog
- Renamed matrices across the engine to follow the `x_from_y` naming convention,
making the space conversion more obvious.
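
The convention reads naturally when transforms are chained; a small illustration using Bevy's re-exported math types (not code from this PR):

```rust
use bevy::math::{Mat4, Vec4};

fn to_clip(
    clip_from_view: Mat4,
    view_from_world: Mat4,
    world_from_local: Mat4,
    p_local: Vec4,
) -> Vec4 {
    // Reading right to left, adjacent names cancel out: the product takes a
    // point from local space all the way to clip space.
    let clip_from_local = clip_from_view * view_from_world * world_from_local;
    clip_from_local * p_local
}
```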

## Migration Guide
- `Frustum`'s `from_view_projection`, `from_view_projection_custom_far`
and `from_view_projection_no_far` were renamed to
`from_clip_from_world`, `from_clip_from_world_custom_far` and
`from_clip_from_world_no_far`.
- `ComputedCameraValues::projection_matrix` was renamed to
`clip_from_view`.
- `CameraProjection::get_projection_matrix` was renamed to
`get_clip_from_view` (this affects implementations on `Projection`,
`PerspectiveProjection` and `OrthographicProjection`).
- `ViewRangefinder3d::from_view_matrix` was renamed to
`from_world_from_view`.
- `PreviousViewData`'s members were renamed to `view_from_world` and
`clip_from_world`.
- `ExtractedView`'s `projection`, `transform` and `view_projection` were
renamed to `clip_from_view`, `world_from_view` and `clip_from_world`.
- `ViewUniform`'s `view_proj`, `unjittered_view_proj`,
`inverse_view_proj`, `view`, `inverse_view`, `projection` and
`inverse_projection` were renamed to `clip_from_world`,
`unjittered_clip_from_world`, `world_from_clip`, `world_from_view`,
`view_from_world`, `clip_from_view` and `view_from_clip`.
- `GpuDirectionalCascade::view_projection` was renamed to
`clip_from_world`.
- `MeshTransforms`' `transform` and `previous_transform` were renamed to
`world_from_local` and `previous_world_from_local`.
- `MeshUniform`'s `transform`, `previous_transform`,
`inverse_transpose_model_a` and `inverse_transpose_model_b` were renamed
to `world_from_local`, `previous_world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh` type in WGSL mirrors this, however `transform` and
`previous_transform` were named `model` and `previous_model`).
- `Mesh2dTransforms::transform` was renamed to `world_from_local`.
- `Mesh2dUniform`'s `transform`, `inverse_transpose_model_a` and
`inverse_transpose_model_b` were renamed to `world_from_local`,
`local_from_world_transpose_a` and `local_from_world_transpose_b` (the
`Mesh2d` type in WGSL mirrors this).
- In WGSL, in `bevy_pbr::mesh_functions`, `get_model_matrix` and
`get_previous_model_matrix` were renamed to `get_world_from_local` and
`get_previous_world_from_local`.
- In WGSL, `bevy_sprite::mesh2d_functions::get_model_matrix` was renamed
to `get_world_from_local`.
2024-06-03 16:56:53 +00:00
IceSentry
bb51635481
Add subdivisions to PlaneMeshBuilder (#13580)
# Objective

- Plane subdivision was removed without providing an alternative

## Solution

- Add subdivision to the PlaneMeshBuilder

---

## Migration Guide

If you were using `Plane` `subdivisions`, you now need to use
`Plane3d::default().mesh().subdivisions(10)`

fixes https://github.com/bevyengine/bevy/issues/13258
2024-06-03 13:57:07 +00:00
Mark Moissette
d26900a9ea
add handling of all missing gltf extras: scene, mesh & materials (#13453)
# Objective

- fixes #4823 

## Solution

As outlined in the discussion in the linked issue as the best current
solution, this PR adds specific GltfExtras for
 - scenes 
 - meshes
 - materials

- As it is, this is not a breaking change. I hesitated to rename the
current "GltfExtras" component to "PrimitiveGltfExtras", but that would
result in a breaking change and might be a bit confusing as to what
"primitive" refers to.
 

## Testing

- I included a bare-bones example & asset (exported gltf file from
Blender) with gltf extras at all the relevant levels: scene, mesh,
material

---

## Changelog
- adds "SceneGltfExtras" injected at the scene level if any
- adds "MeshGltfExtras", injected at the mesh level if any
- adds "MaterialGltfExtras", injected at the mesh level if any: ie if a
mesh has a material that has gltf extras, the component will be injected
there.
2024-06-03 13:16:38 +00:00
François Mockers
5559632977
glTF labels: add enum to avoid misspelling and keep up-to-date list documented (#13586)
# Objective

- Followup to #13548
- It added a list of all possible labels to the documentation. This seems
hard to keep up to date and doesn't stop people from making spelling mistakes

## Solution

- Add an enum that can create all the labels possible, and encourage its
use rather than manually typed labels

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Rob Parrett <robparrett@gmail.com>
2024-05-31 23:25:57 +00:00
Martín Maita
f237cf2441
Updates default Text font size to 24px (#13603)
# Objective

- The default font size is too small to be useful in examples or for
debug text.
- Fixes #13587

## Solution

- Updated the default font size value in `TextStyle` from 12px to 24px.
- Switched most examples back to `Text` defaults so they use the default
font size.

## Testing

- WIP

---

## Migration Guide

- The default font size has been increased from 12px to 24px. Make sure
you set the font size to an appropriate value wherever you were relying
on the `Default` text style.
2024-05-31 16:41:27 +00:00
François Mockers
cf7d810980
color_grading: update UI when camera settings changed (#13589)
# Objective

- In #13542 I broke example `color_grading`: the UI is not updated to
reflect changed camera settings
- Fixes #13590.

## Solution

- Update the UI when changing color grading
2024-05-31 06:42:05 +00:00
Rob Parrett
06f733b16f
Use standard instruction text / position in various examples (#13583)
## Objective

Use the "standard" text size / placement for the new text in these
examples.

Continuation of an effort started here:
https://github.com/bevyengine/bevy/pull/8478

This is definitely not comprehensive. I did the ones that were easy to
find and relatively straightforward to update. I meant to just do
`3d_shapes` and `2d_shapes`, but one thing led to another.

## Solution

Use `font_size: 20.0`, the default (built-in) font, `Color::WHITE`
(default), and `Val::Px(12.)` from the edges of the screen.

There are a few little drive-by cleanups of defaults not being used,
etc.

## Testing

Ran the changed examples, verified that they still look reasonable.
2024-05-30 23:11:23 +00:00
IceSentry
ed042e5f9a
Add wireframe toggle to 2d and 3d shapes example (#13581)
# Objective

- It's nice to be able to see how the mesh looks for each primitive

## Solution

- Add a way to toggle wireframes when pressing spacebar
- I also added some text to indicate this is an option


![image](https://github.com/bevyengine/bevy/assets/8348954/d37fe644-65e6-42fa-9420-390150c03c17)

![2d_shapes_GpaCcK3Rek](https://github.com/bevyengine/bevy/assets/8348954/2e977a47-4c3d-44f7-b149-1e67f522f0b6)
2024-05-30 20:00:59 +00:00
Alice Cecile
9d74e16821
Set the default target exposure to the minimum value, not 0 (#13562)
# Objective

- In particularly dark scenes, auto-exposure would lead to an unexpected
darkening of the view.
- Fixes #13446.

## Solution

The average luminance should default to something other than 0.0 when
there are no samples. We set it to `settings.min_log_lum`.

## Testing

I was able to reproduce the problem on the `auto_exposure` example by
setting the point light intensity to 2000 and looking into the
right-hand corner. There was a sudden darkening.

Now, the discontinuity is gone.

---------

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
2024-05-29 22:37:42 +00:00
andristarr
2ac290dd8f
Swapping back to using From<Color> for StandardMaterial in examples (#13566)
# Objective

Fixes #13547
2024-05-29 13:50:28 +00:00
GitGhillie
f45eddfe82
Set ambient_intensity to 0.0 in volumetric_fog example, correct doc comment (#13531)
# Objective

- Fixes #13521

## Solution

Set `ambient_intensity` to 0.0 in volumetric_fog example.

I chose setting it explicitly over changing the default in order to make
it clear that this needs to be set depending on whether you have an
`EnvironmentMapLight`. See documentation for `ambient_intensity` and
related members.

## Testing

- Run the volumetric_fog example and notice how the light shown in
#13521 is not there anymore, as expected.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-05-28 10:55:29 +00:00
François Mockers
d92b1fcef7
small improvements on the color_grading example (#13542)
# Objective

- Small improvements on the `color_grading` example

## Solution

- Simplify button creation by creating buttons in the default state; the
selected one is selected automatically
- Don't update the UI if not needed
- Also invert the border of the selected button
- Simplify text update
2024-05-27 22:09:38 +00:00
François Mockers
a8751390aa
revert reflections PR changes to the meshlet example (#13539)
# Objective

- #13418 introduced unwanted changes to the meshlet example

## Solution

- Remove them
2024-05-27 19:48:18 +00:00
Patrick Walton
f398674e51
Implement opt-in sharp screen-space reflections for the deferred renderer, with improved raymarching code. (#13418)
This commit, a revamp of #12959, implements screen-space reflections
(SSR), which approximate real-time reflections based on raymarching
through the depth buffer and copying samples from the final rendered
frame. This patch is a relatively minimal implementation of SSR, so as
to provide a flexible base on which to customize and build in the
future. However, it's based on the production-quality [raymarching code
by Tomasz
Stachowiak](https://gist.github.com/h3r2tic/9c8356bdaefbe80b1a22ae0aaee192db).

For a general basic overview of screen-space reflections, see
[1](https://lettier.github.io/3d-game-shaders-for-beginners/screen-space-reflection.html).
The raymarching shader uses the basic algorithm of tracing forward in
large steps, refining that trace in smaller increments via binary
search, and then using the secant method. No temporal filtering or
roughness blurring is performed at all; for this reason, SSR currently
only operates on very shiny surfaces. No acceleration via the
hierarchical Z-buffer is implemented (though note that
https://github.com/bevyengine/bevy/pull/12899 will add the
infrastructure for this). Reflections are traced at full resolution,
which is often considered slow. All of these improvements and more can
be follow-ups.

SSR is built on top of the deferred renderer and is currently only
supported in that mode. Forward screen-space reflections are possible
albeit uncommon (though e.g. *Doom Eternal* uses them); however, they
require tracing from the previous frame, which would add complexity.
This patch leaves the door open to implementing SSR in the forward
rendering path but doesn't itself have such an implementation.
Screen-space reflections aren't supported in WebGL 2, because they
require sampling from the depth buffer, which Naga can't do because of a
bug (`sampler2DShadow` is incorrectly generated instead of `sampler2D`;
this is the same reason why depth of field is disabled on that
platform).

To add screen-space reflections to a camera, use the
`ScreenSpaceReflectionsBundle` bundle or the
`ScreenSpaceReflectionsSettings` component. In addition to
`ScreenSpaceReflectionsSettings`, `DepthPrepass` and `DeferredPrepass`
must also be present for the reflections to show up. The
`ScreenSpaceReflectionsSettings` component contains several settings
that artists can tweak, and also comes with sensible defaults.

A new example, `ssr`, has been added. It's loosely based on the
[three.js ocean
sample](https://threejs.org/examples/webgl_shaders_ocean.html), but all
the assets are original. Note that the three.js demo has no screen-space
reflections and instead renders a mirror world. In contrast to #12959,
this demo tests not only a cube but also a more complex model (the
flight helmet).

## Changelog

### Added

* Screen-space reflections can be enabled for very smooth surfaces by
adding the `ScreenSpaceReflections` component to a camera. Deferred
rendering must be enabled for the reflections to appear.

![Screenshot 2024-05-18
143555](https://github.com/bevyengine/bevy/assets/157897/b8675b39-8a89-433e-a34e-1b9ee1233267)

![Screenshot 2024-05-18
143606](https://github.com/bevyengine/bevy/assets/157897/cc9e1cd0-9951-464a-9a08-e589210e5606)
2024-05-27 13:43:40 +00:00
Joona Aalto
383314ef62
Add meshing for ConicalFrustum (#11819)
# Objective

The `ConicalFrustum` primitive should support meshing.

## Solution

Implement meshing for the `ConicalFrustum` primitive. The implementation
is nearly identical to `Cylinder` meshing, but supports two radii.

The default conical frustum is equivalent to a cone with a height of 1
and a radius of 0.5, truncated at half-height.


![kuva](https://github.com/bevyengine/bevy/assets/57632562/b4cab136-ff55-4056-b818-1218e4f38845)
2024-05-25 21:56:09 +00:00
andristarr
44c0325ecd
Emissive is now LinearRgba on StandardMaterial (#13352)
StandardMaterial stores a LinearRgba instead of a Color for emissive

Fixes #13212
2024-05-24 17:23:35 +00:00
Jiří Švejda
4dbfdcf192
Fix lighting example following emissive material changes in #13350 (#13480)
# Objective

After the emissive material changes in #13350, the red and green point
lights in the `lighting` example turned white.

## Solution

This PR gives the point lights the `emissive_exposure_weight` property
in order for them to appear with the correct color again.

## Testing

The `lighting` example before this fix:


![image](https://github.com/bevyengine/bevy/assets/143610747/be31d422-f616-4651-ab63-18ddfdba3773)

After this fix (looks the same as before #13350):


![image](https://github.com/bevyengine/bevy/assets/143610747/e5b5eab3-0588-4f30-bf74-2b52db7345ad)
2024-05-23 00:30:30 +00:00
Matty
c7f7d906ca
Tetrahedron mesh (#13463)
# Objective

Allow the `Tetrahedron` primitive to be used for mesh generation. This
is part of ongoing work to unify the capabilities of `bevy_math`
primitives.

## Solution

`Tetrahedron` implements `Meshable`. Essentially, each face is just
meshed as a `Triangle3d`, but first there is an inversion step when the
signed volume of the tetrahedron is negative to ensure that the faces
all actually point outward.

## Testing

I loaded up some examples and hackily exchanged existing meshes with the
new one to see that it works as expected.
2024-05-22 12:22:11 +00:00
Johannes Hackel
1fcf6a444f
Add emissive_exposure_weight to the StandardMaterial (#13350)
# Objective

- The emissive color gets multiplied by the camera exposure value. But
this cancels out almost any emissive effect.
- Fixes #13133
- Closes PR #13337 

## Solution
- Add emissive_exposure_weight to the StandardMaterial
- In the shader this value is stored in the alpha channel of the
emissive color.
- This value defines how much the exposure influences the emissive
color.
- It matches Google's Filament implementation:
https://google.github.io/filament/Materials.html#emissive

4f021583f1/shaders/src/shading_lit.fs (L287)

## Testing

- The result of
[EmissiveStrengthTest](https://github.com/KhronosGroup/glTF-Sample-Models/tree/main/2.0/EmissiveStrengthTest)
with the default value of 0.0:

without bloom:

![emissive_fix](https://github.com/bevyengine/bevy/assets/688816/8f8c131a-464a-4d7b-a9e4-4e28d679ee5d)

with bloom:

![emissive_fix_bloom](https://github.com/bevyengine/bevy/assets/688816/89f200ee-3bd5-4daa-bf64-8999b56df3fa)
2024-05-17 13:49:53 +00:00
François Mockers
104dcf5a67
example render_to_texture: remove extra light (#13398)
# Objective

- in example `render_to_texture`, #13317 changed the comment on the
existing light to say lights don't work on multiple layers, then added a
light on multiple layers explaining that it would work. It's confusing.
## Solution

- Keep the original light, with the updated comment

## Testing

- Run example `render_to_texture`, lighting is correct
2024-05-16 23:26:55 +00:00
Rob Parrett
7cbc0357be
Use load_with_settings instead of manually overriding srgbness in examples (#13399)
# Objective

`parallax_mapping` and `deferred_rendering` both use a roundabout way of
manually overriding the srgbness of their normal map textures.

This can now be done with `load_with_settings` in one line of code.
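
Roughly (a sketch; the settings type and import path are assumed to be the ones used by the image loader):

```rust
use bevy::prelude::*;
use bevy::render::texture::ImageLoaderSettings;

fn load_normal_map(asset_server: Res<AssetServer>) -> Handle<Image> {
    // Load the normal map as linear (non-sRGB) data in a single call, instead of
    // patching the image's srgb-ness after the fact in a separate system.
    asset_server.load_with_settings(
        "textures/normal_map.png",
        |settings: &mut ImageLoaderSettings| settings.is_srgb = false,
    )
}
```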

## Solution

- Delete the override systems and use `load_with_settings` instead
- Make `deferred_rendering`'s instruction text style consistent with
other examples while I'm in there.
    (see #8478)

## Testing

Tested by running with `load` instead of `load_settings` and confirming
that lighting looks bad when `is_srgb` is not configured, and good when
it is.

## Discussion

It would arguably make more sense to configure this in a `.meta` file,
but I used `load_with_settings` because that's how it was done in the
`clearcoat` example and it does seem nice for documentation purposes to
call this out explicitly in code.
2024-05-16 23:26:22 +00:00
Patrick Walton
19bfa41768
Implement volumetric fog and volumetric lighting, also known as light shafts or god rays. (#13057)
This commit implements a more physically-accurate, but slower, form of
fog than the `bevy_pbr::fog` module does. Notably, this *volumetric fog*
allows for light beams from directional lights to shine through,
creating what is known as *light shafts* or *god rays*.

To add volumetric fog to a scene, add `VolumetricFogSettings` to the
camera, and add `VolumetricLight` to directional lights that you wish to
be volumetric. `VolumetricFogSettings` has numerous settings that allow
you to define the accuracy of the simulation, as well as the look of the
fog. Currently, only interaction with directional lights that have
shadow maps is supported. Note that the overhead of the effect scales
directly with the number of directional lights in use, so apply
`VolumetricLight` sparingly for the best results.
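
A minimal setup sketch based on the description above (default settings assumed to be reasonable):

```rust
use bevy::pbr::{VolumetricFogSettings, VolumetricLight};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // The camera gets the fog settings...
    commands.spawn((Camera3dBundle::default(), VolumetricFogSettings::default()));
    // ...and each directional light that should cast light shafts gets VolumetricLight.
    commands.spawn((
        DirectionalLightBundle {
            directional_light: DirectionalLight { shadows_enabled: true, ..default() },
            ..default()
        },
        VolumetricLight,
    ));
}
```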

The overall algorithm, which is implemented as a postprocessing effect,
is a combination of the techniques described in [Scratchapixel] and
[this blog post]. It uses raymarching in screen space, transformed into
shadow map space for sampling and combined with physically-based
modeling of absorption and scattering. Bevy employs the widely-used
[Henyey-Greenstein phase function] to model asymmetry; this essentially
allows light shafts to fade into and out of existence as the user views
them.

Volumetric rendering is a huge subject, and I deliberately kept the
scope of this commit small. Possible follow-ups include:

1. Raymarching at a lower resolution.

2. A post-processing blur (especially useful when combined with (1)).

3. Supporting point lights and spot lights.

4. Supporting lights with no shadow maps.

5. Supporting irradiance volumes and reflection probes.

6. Voxel components that reuse the volumetric fog code to create voxel
shapes.

7. *Horizon: Zero Dawn*-style clouds.

These are all useful, but out of scope of this patch for now, to keep
things tidy and easy to review.

A new example, `volumetric_fog`, has been added to demonstrate the
effect.

## Changelog

### Added

* A new component, `VolumetricFog`, is available, to allow for a more
physically-accurate, but more resource-intensive, form of fog.

* A new component, `VolumetricLight`, can be placed on directional
lights to make them interact with `VolumetricFog`. Notably, this allows
such lights to emit light shafts/god rays.

![Screenshot 2024-04-21
162808](https://github.com/bevyengine/bevy/assets/157897/7a1fc81d-eed5-4735-9419-286c496391a9)

![Screenshot 2024-04-21
132005](https://github.com/bevyengine/bevy/assets/157897/e6d3b5ca-8f59-488d-a3de-15e95aaf4995)

[Scratchapixel]:
https://www.scratchapixel.com/lessons/3d-basic-rendering/volume-rendering-for-developers/intro-volume-rendering.html

[this blog post]: https://www.alexandre-pestana.com/volumetric-lights/

[Henyey-Greenstein phase function]:
https://www.pbr-book.org/4ed/Volume_Scattering/Phase_Functions#TheHenyeyndashGreensteinPhaseFunction
2024-05-16 17:13:18 +00:00
charlotte
4c3b7679ec
#12502 Remove limit on RenderLayers. (#13317)
# Objective

Remove the limit on `RenderLayers` by using a growable mask backed by
`SmallVec`.

Changes adopted from @UkoeHB's initial PR here
https://github.com/bevyengine/bevy/pull/12502 that contained additional
changes related to propagating render layers.


## Solution

The main thing needed to unblock this is removing `RenderLayers` from
our shader code. This primarily affects `DirectionalLight`. We are now
computing a `skip` field on the CPU that is then used to skip the light
in the shader.

## Testing

Checked a variety of examples and did a quick benchmark on `many_cubes`.
There were some existing problems identified during the development of
the original pr (see:
https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347).
This PR shouldn't change any existing behavior besides removing the
layer limit (sans the comment in migration about `all` layers no longer
being possible).

---

## Changelog

Removed the limit on `RenderLayers` by using a growable bitset that only
allocates when layers greater than 64 are used.

## Migration Guide

- `RenderLayers::all()` no longer exists. Entities expecting to be
visible on all layers, e.g. lights, should compute the active layers
that are in use.
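
For example, layers above the old limit now simply work (a sketch; storage is only allocated once such a layer is used):

```rust
use bevy::render::view::RenderLayers;

fn layers_demo() {
    // Previously limited to a small fixed-size mask; now any layer index is valid.
    let camera_layers = RenderLayers::from_layers(&[0, 64, 130]);
    // Since RenderLayers::all() is gone, list the layers actually in use instead.
    let light_layers = RenderLayers::layer(0).with(64).with(130);
    assert!(camera_layers.intersects(&light_layers));
}
```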

---------

Co-authored-by: robtfm <50659922+robtfm@users.noreply.github.com>
2024-05-16 16:15:47 +00:00
Patrick Walton
df31b808c3
Implement fast depth of field as a postprocessing effect. (#13009)
This commit implements the [depth of field] effect, simulating the blur
of objects out of focus of the virtual lens. Either the [hexagonal
bokeh] effect or a faster Gaussian blur may be used. In both cases, the
implementation is a simple separable two-pass convolution. This is not
the most physically-accurate real-time bokeh technique that exists;
Unreal Engine has [a more accurate implementation] of "cinematic depth
of field" from 2018. However, it's simple, and most engines provide
something similar as a fast option, often called "mobile" depth of
field.

The general approach is outlined in [a blog post from 2017]. We take
advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs
are *separable*. This means that their 2D kernels can be reduced to a
small number of 1D kernels applied one after another, asymptotically
reducing the amount of work that has to be done. Gaussian blurs can be
accomplished by blurring horizontally and then vertically, while
hexagonal bokeh blurs can be done with a vertical blur plus a diagonal
blur, plus two diagonal blurs. In both cases, only two passes are
needed. Bokeh requires the first pass to have a second render target and
requires two subpasses in the second pass, which decreases its
performance relative to the Gaussian blur.

The bokeh blur is generally more aesthetically pleasing than the
Gaussian blur, as it simulates the effect of a camera more accurately.
The shape of the bokeh circles is determined by the number of blades of
the aperture. In our case, we use a hexagon, which is usually considered
specific to lower-quality cameras. (This is a downside of the fast
hexagon approach compared to the higher-quality approaches.) The blur
amount is generally specified by the [f-number], which we use to compute
the focal length from the film size and FOV. By default, we simulate
standard cinematic cameras of f/1 and [Super 35]. The developer can
customize these values as desired.

A new example has been added to demonstrate depth of field. It allows
customization of the mode (Gaussian vs. bokeh), focal distance and
f-numbers. The test scene is inspired by a [blog post on depth of field
in Unity]; however, the effect is implemented in a completely different
way from that blog post, and all the assets (textures, etc.) are
original.

Bokeh depth of field:
![Screenshot 2024-04-17
152535](https://github.com/bevyengine/bevy/assets/157897/702f0008-1c8a-4cf3-b077-4110f8c46584)

Gaussian depth of field:
![Screenshot 2024-04-17
152542](https://github.com/bevyengine/bevy/assets/157897/f4ece47a-520e-4483-a92d-f4fa760795d3)

No depth of field:
![Screenshot 2024-04-17
152547](https://github.com/bevyengine/bevy/assets/157897/9444e6aa-fcae-446c-b66b-89469f1a1325)

[depth of field]: https://en.wikipedia.org/wiki/Depth_of_field

[hexagonal bokeh]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[a more accurate implementation]:
https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

[a blog post from 2017]:
https://colinbarrebrisebois.com/2017/04/18/hexagonal-bokeh-blur-revisited/

[f-number]: https://en.wikipedia.org/wiki/F-number

[Super 35]: https://en.wikipedia.org/wiki/Super_35

[blog post on depth of field in Unity]:
https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

## Changelog

### Added

* A depth of field postprocessing effect is now available, to simulate
objects being out of focus of the camera. To use it, add
`DepthOfFieldSettings` to an entity containing a `Camera3d` component.
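
A hedged sketch of enabling the effect (module path and field names assumed from the description above):

```rust
use bevy::core_pipeline::dof::{DepthOfFieldMode, DepthOfFieldSettings};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        DepthOfFieldSettings {
            mode: DepthOfFieldMode::Bokeh, // or DepthOfFieldMode::Gaussian for the faster blur
            focal_distance: 5.0,           // distance to the plane that stays in focus
            aperture_f_stops: 1.0,         // f/1, the cinematic default described above
            ..default()
        },
    ));
}
```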

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Bram Buurlage <brambuurlage@gmail.com>
2024-05-13 18:23:56 +00:00
Joona Aalto
ac1f135e20
Add meshing for Cone (#11820)
# Objective

The `Cone` primitive should support meshing.

## Solution

Implement meshing for the `Cone` primitive. The default cone has a
height of 1 and a base radius of 0.5, and is centered at the origin.

An issue with cone meshes is that the tip does not really have a normal
that works, even with duplicated vertices. This PR uses only a single
vertex for the tip, with a normal of zero; this results in an "invalid"
normal that gets ignored by the fragment shader. This seems to be the
only approach we have for perfectly smooth cones. For discussion on the
topic, see #10298 and #5891.

Another thing to note is that the cone uses polar coordinates for the
UVs:

<img
src="https://github.com/bevyengine/bevy/assets/57632562/e101ded9-110a-4ac4-a98d-f1e4d740a24a"
alt="cone" width="400" />

This way, textures are applied as if looking at the cone from above:

<img
src="https://github.com/bevyengine/bevy/assets/57632562/8dea00f1-a283-4bc4-9676-91e8d4adb07a"
alt="texture" width="200" />

<img
src="https://github.com/bevyengine/bevy/assets/57632562/d9d1b5e6-a8ba-4690-b599-904dd85777a1"
alt="cone" width="200" />
2024-05-13 18:00:59 +00:00
Rob Parrett
2fd432c463
Fix motion blur on wasm (#13099)
# Objective

Fixes #13097 and other issues preventing the motion blur example from
working on wasm

## Solution

- Use a vec2 for padding
- Fix error initializing the `MotionBlur` struct on wasm+webgl2
- Disable MSAA on wasm+webgl2
- Fix `GlobalsUniform` padding getting added on the shader side for
webgpu builds

## Notes

The motion blur example now runs, but with artifacts. In addition to the
obvious black artifacts, the motion blur or dithering seem to just look
worse in a way I can't really describe. That may be expected.

```
AdapterInfo { name: "ANGLE (Apple, ANGLE Metal Renderer: Apple M1 Max, Unspecified Version)", vendor: 4203, device: 0, device_type: IntegratedGpu, driver: "", driver_info: "", backend: Gl }
```
<img width="1276" alt="Screenshot 2024-04-25 at 6 51 21 AM"
src="https://github.com/bevyengine/bevy/assets/200550/65401d4f-92fe-454b-9dbc-a2d89d3ad963">
2024-05-12 21:03:36 +00:00
Fpgu
60a73fa60b
Use Dir3 for local axis methods in GlobalTransform (#13264)
Switched the return type from `Vec3` to `Dir3` for directional axis
methods within the `GlobalTransform` component.

## Migration Guide
The `GlobalTransform` component's directional axis methods (e.g.,
`right()`, `left()`, `up()`, `down()`, `back()`, `forward()`) have been
updated from returning `Vec3` to `Dir3`.
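
In practice the change is mostly transparent, since `Dir3` behaves like a normalized `Vec3`; a small sketch:

```rust
use bevy::prelude::*;

fn step_forward(transform: &GlobalTransform, distance: f32) -> Vec3 {
    let forward: Dir3 = transform.forward(); // previously returned Vec3
    // Dir3 multiplies with scalars and adds to Vec3 like a unit vector.
    transform.translation() + forward * distance
}
```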
2024-05-06 20:52:05 +00:00
Patrick Walton
77ed72bc16
Implement clearcoat per the Filament and the KHR_materials_clearcoat specifications. (#13031)
Clearcoat is a separate material layer that represents a thin
translucent layer of a material. Examples include (from the [Filament
spec]) car paint, soda cans, and lacquered wood. This commit implements
support for clearcoat following the Filament and Khronos specifications,
marking the beginnings of support for multiple PBR layers in Bevy.

The [`KHR_materials_clearcoat`] specification describes the clearcoat
support in glTF. In Blender, applying a clearcoat to the Principled BSDF
node causes the clearcoat settings to be exported via this extension. As
of this commit, Bevy parses and reads the extension data when present in
glTF. Note that the `gltf` crate has no support for
`KHR_materials_clearcoat`; this patch therefore implements the JSON
semantics manually.

Clearcoat is integrated with `StandardMaterial`, but the code is behind
a series of `#ifdef`s that only activate when clearcoat is present.
Additionally, the `pbr_feature_layer_material_textures` Cargo feature
must be active in order to enable support for clearcoat factor maps,
clearcoat roughness maps, and clearcoat normal maps. This approach
mirrors the same pattern used by the existing transmission feature and
exists to avoid running out of texture bindings on platforms like WebGL
and WebGPU. Note that constant clearcoat factors and roughness values
*are* supported in the browser; only the relatively-less-common maps are
disabled on those platforms.
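
A hedged sketch of the constant factors mentioned above (field names assumed):

```rust
use bevy::prelude::*;

fn lacquered_wood(base: Handle<Image>) -> StandardMaterial {
    StandardMaterial {
        base_color_texture: Some(base),
        perceptual_roughness: 0.6,
        // Strength of the thin translucent layer on top of the base material.
        clearcoat: 1.0,
        // The coating itself is glossy even though the wood underneath is rough.
        clearcoat_perceptual_roughness: 0.1,
        ..default()
    }
}
```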

This patch refactors the lighting code in `StandardMaterial`
significantly in order to better support multiple layers in a natural
way. That code was due for a refactor in any case, so this is a nice
improvement.

A new demo, `clearcoat`, has been added. It's based on [the
corresponding three.js demo], but all the assets (aside from the skybox
and environment map) are my original work.

[Filament spec]:
https://google.github.io/filament/Filament.html#materialsystem/clearcoatmodel

[`KHR_materials_clearcoat`]:
https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_clearcoat/README.md

[the corresponding three.js demo]:
https://threejs.org/examples/webgl_materials_physical_clearcoat.html

![Screenshot 2024-04-19
101143](https://github.com/bevyengine/bevy/assets/157897/3444bcb5-5c20-490c-b0ad-53759bd47ae2)

![Screenshot 2024-04-19
102054](https://github.com/bevyengine/bevy/assets/157897/6e953944-75b8-49ef-bc71-97b0a53b3a27)

## Changelog

### Added

* `StandardMaterial` now supports a clearcoat layer, which represents a
thin translucent layer over an underlying material.
* The glTF loader now supports the `KHR_materials_clearcoat` extension,
representing materials with clearcoat layers.

## Migration Guide

* The lighting functions in the `pbr_lighting` WGSL module now have
clearcoat parameters, if `STANDARD_MATERIAL_CLEARCOAT` is defined.

* The `R` reflection vector parameter has been removed from some
lighting functions, as it was unused.
2024-05-05 22:57:05 +00:00
Bram Buurlage
d390420093
Implement Auto Exposure plugin (#12792)
# Objective

- Add auto exposure/eye adaptation to the bevy render pipeline.
- Support features that users might expect from other engines:
  - Metering masks
  - Compensation curves
  - Smooth exposure transitions 

This PR is based on an implementation I already built for a personal
project before https://github.com/bevyengine/bevy/pull/8809 was
submitted, so I wasn't able to adopt that PR in the proper way. I've
still drawn inspiration from it, so @fintelia should be credited as
well.

## Solution

An auto exposure compute shader builds a 64 bin histogram of the scene's
luminance, and then adjusts the exposure based on that histogram. Using
a histogram allows the system to ignore outliers like shadows and
specular highlights, and it allows giving more weight to certain areas
based on a mask.

---

## Changelog

- Added: AutoExposure plugin that allows adjusting a camera's exposure
based on its scene's luminance.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
2024-05-03 17:45:17 +00:00
Patrick Walton
31835ff76d
Implement visibility ranges, also known as hierarchical levels of detail (HLODs). (#12916)
Implement visibility ranges, also known as hierarchical levels of detail
(HLODs).

This commit introduces a new component, `VisibilityRange`, which allows
developers to specify camera distances in which meshes are to be shown
and hidden. Hiding meshes happens early in the rendering pipeline, so
this feature can be used for level of detail optimization. Additionally,
this feature is properly evaluated per-view, so different views can show
different levels of detail.

This feature differs from proper mesh LODs, which can be implemented
later. Engines generally implement true mesh LODs later in the pipeline;
they're typically more efficient than HLODs with GPU-driven rendering.
However, mesh LODs are more limited than HLODs, because they require the
lower levels of detail to be meshes with the same vertex layout and
shader (and perhaps the same material) as the original mesh. Games often
want to use objects other than meshes to replace distant models, such as
*octahedral imposters* or *billboard imposters*.

The reason why the feature is called *hierarchical level of detail* is
that HLODs can replace multiple meshes with a single mesh when the
camera is far away. This can be useful for reducing drawcall count. Note
that `VisibilityRange` doesn't automatically propagate down to children;
it must be placed on every mesh.

Crossfading between different levels of detail is supported, using the
standard 4x4 ordered dithering pattern from [1]. The shader code to
compute the dithering patterns should be well-optimized. The dithering
code is only active when visibility ranges are in use for the mesh in
question, so that we don't lose early Z.

Cascaded shadow maps show the HLOD level of the view they're associated
with. Point light and spot light shadow maps, which have no CSMs,
display all HLOD levels that are visible in any view. To support this
efficiently and avoid doing visibility checks multiple times, we
precalculate all visible HLOD levels for each entity with a
`VisibilityRange` during the `check_visibility_range` system.

A new example, `visibility_range`, has been added to the tree, as well
as a new low-poly version of the flight helmet model to go with it. It
demonstrates use of the visibility range feature to provide levels of
detail.

[1]: https://en.wikipedia.org/wiki/Ordered_dithering#Threshold_map

[^1]: Unreal doesn't have a feature that exactly corresponds to
visibility ranges, but Unreal's HLOD system serves roughly the same
purpose.

## Changelog

### Added

* A new `VisibilityRange` component is available to conditionally enable
entity visibility at camera distances, with optional crossfade support.
This can be used to implement different levels of detail (LODs).
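
A hedged usage sketch (field names assumed): show a mesh between roughly 10 and 100 units from the camera, crossfading over small margins at both ends.

```rust
use bevy::prelude::*;
use bevy::render::view::VisibilityRange;

fn spawn_lod_level(
    commands: &mut Commands,
    mesh: Handle<Mesh>,
    material: Handle<StandardMaterial>,
) {
    commands.spawn((
        PbrBundle { mesh, material, ..default() },
        VisibilityRange {
            // Fade in between 10 and 15 units, fade out between 95 and 100 units.
            start_margin: 10.0..15.0,
            end_margin: 95.0..100.0,
        },
    ));
}
```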

## Screenshots

High-poly model:
![Screenshot 2024-04-09
185541](https://github.com/bevyengine/bevy/assets/157897/7e8be017-7187-4471-8866-974e2d8f2623)

Low-poly model up close:
![Screenshot 2024-04-09
185546](https://github.com/bevyengine/bevy/assets/157897/429603fe-6bb7-4246-8b4e-b4888fd1d3a0)

Crossfading between the two:
![Screenshot 2024-04-09
185604](https://github.com/bevyengine/bevy/assets/157897/86d0d543-f8f3-49ec-8fe5-caa4d0784fd4)

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2024-05-03 00:11:35 +00:00
François Mockers
1c15ac647a
Example setup for tooling (#13088)
# Objective

- #12755 introduced the need to download a file to run an example
- This means the example fails to run in CI without downloading that
file

## Solution

- Add a new metadata field to examples, "setup", that provides setup
instructions
- Replace the URL in the meshlet example with one that can actually be
downloaded
- example-showcase executes the setup before running an example
2024-05-02 20:10:09 +00:00
Alice Cecile
b2123ffa41
Fix CI error on new color grading example (#13180)
# Objective

Fixes https://github.com/bevyengine/bevy/issues/13179

## Solution

Obey clippy.

## Commentary

I'm really confused why CI didn't fail on the initial PR merge here.

Co-authored-by: Alice Cecile <alice.i.cecil@gmail.com>
2024-05-02 13:40:45 +00:00
Patrick Walton
961b24deaf
Implement filmic color grading. (#13121)
This commit expands Bevy's existing tonemapping feature to a complete
set of filmic color grading tools, matching those of engines like Unity,
Unreal, and Godot. The following features are supported:

* White point adjustment. This is inspired by Unity's implementation of
the feature, but simplified and optimized. *Temperature* and *tint*
control the adjustments to the *x* and *y* chromaticity values of [CIE
1931]. Following Unity, the adjustments are made relative to the [D65
standard illuminant] in the [LMS color space].

* Hue rotation. This simply converts the RGB value to [HSV], alters the
hue, and converts back.

* Color correction. This allows the *gamma*, *gain*, and *lift* values
to be adjusted according to the standard [ASC CDL combined function].

* Separate color correction for shadows, midtones, and highlights.
Blender's source code was used as a reference for the implementation of
this. The midtone ranges can be adjusted by the user. To avoid abrupt
color changes, a small crossfade is used between the different sections
of the image, again following Blender's formulas.

A new example, `color_grading`, has been added, offering a GUI to change
all the color grading settings. It uses the same test scene as the
existing `tonemapping` example, which has been factored out into a
shared glTF scene.

[CIE 1931]: https://en.wikipedia.org/wiki/CIE_1931_color_space

[D65 standard illuminant]:
https://en.wikipedia.org/wiki/Standard_illuminant#Illuminant_series_D

[LMS color space]: https://en.wikipedia.org/wiki/LMS_color_space

[HSV]: https://en.wikipedia.org/wiki/HSL_and_HSV

[ASC CDL combined function]:
https://en.wikipedia.org/wiki/ASC_CDL#Combined_Function

## Changelog

### Added

* Many new filmic color grading options have been added to the
`ColorGrading` component.

## Migration Guide

* `ColorGrading::gamma` and `ColorGrading::pre_saturation` are now set
separately for the `shadows`, `midtones`, and `highlights` sections. You
can migrate code with the `ColorGrading::all_sections` and
`ColorGrading::all_sections_mut` functions, which access and/or update
all sections at once.
* `ColorGrading::post_saturation` and `ColorGrading::exposure` are now
fields of `ColorGrading::global`.
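
A short sketch of the migrated API shape (field and method names taken from the guide above; the import path is assumed):

```rust
use bevy::render::view::ColorGrading; // path assumed

fn tweak_grading(mut grading: ColorGrading) -> ColorGrading {
    // Global controls now live under `global`.
    grading.global.exposure = 0.5;
    grading.global.post_saturation = 1.05;
    // Per-section controls can be updated together via all_sections_mut().
    for section in grading.all_sections_mut() {
        section.gamma = 1.1; // applied to shadows, midtones, and highlights alike
    }
    grading
}
```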

## Screenshots

![Screenshot 2024-04-27
143144](https://github.com/bevyengine/bevy/assets/157897/c1de5894-917d-4101-b5c9-e644d141a941)

![Screenshot 2024-04-27
143216](https://github.com/bevyengine/bevy/assets/157897/da393c8a-d747-42f5-b47c-6465044c788d)
2024-05-02 12:18:59 +00:00
Patrick Walton
16531fb3e3
Implement GPU frustum culling. (#12889)
This commit implements opt-in GPU frustum culling, built on top of the
infrastructure in https://github.com/bevyengine/bevy/pull/12773. To
enable it on a camera, add the `GpuCulling` component to it. To
additionally disable CPU frustum culling, add the `NoCpuCulling`
component. Note that adding `GpuCulling` without `NoCpuCulling`
*currently* does nothing useful. The reason why `GpuCulling` doesn't
automatically imply `NoCpuCulling` is that I intend to follow this patch
up with GPU two-phase occlusion culling, and CPU frustum culling plus
GPU occlusion culling seems like a very commonly-desired mode.

Adding the `GpuCulling` component to a view puts that view into
*indirect mode*. This mode makes all drawcalls indirect, relying on the
mesh preprocessing shader to allocate instances dynamically. In indirect
mode, the `PreprocessWorkItem` `output_index` points not to a
`MeshUniform` instance slot but instead to a set of `wgpu`
`IndirectParameters`, from which it allocates an instance slot
dynamically if frustum culling succeeds. Batch building has been updated
to allocate and track indirect parameter slots, and the AABBs are now
supplied to the GPU as `MeshCullingData`.

A small amount of code relating to the frustum culling has been borrowed
from meshlets and moved into `maths.wgsl`. Note that standard Bevy
frustum culling uses AABBs, while meshlets use bounding spheres; this
means that not as much code can be shared as one might think.

This patch doesn't provide any way to perform GPU culling on shadow
maps, to avoid making this patch bigger than it already is. That can be
a followup.

## Changelog

### Added

* Frustum culling can now optionally be done on the GPU. To enable it,
add the `GpuCulling` component to a camera.
* To disable CPU frustum culling, add `NoCpuCulling` to a camera. Note
that `GpuCulling` doesn't automatically imply `NoCpuCulling`.
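
A hedged sketch of opting a camera in (component names from the changelog above; import path assumed):

```rust
use bevy::prelude::*;
use bevy::render::view::{GpuCulling, NoCpuCulling};

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        GpuCulling,   // frustum culling runs in the mesh preprocessing shader (indirect mode)
        NoCpuCulling, // also skip CPU frustum culling; omit this to keep CPU culling as well
    ));
}
```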
2024-04-28 12:50:00 +00:00
JMS55
e1a0da0fa6
Meshlet LOD-compatible two-pass occlusion culling (#12898)
Keeping track of explicit visibility per cluster between frames does not
work with LODs, and leads to worse culling (using the final depth buffer
from the previous frame is more accurate).

Instead, we need to generate a second depth pyramid after the second
raster pass, and then use that in the first culling pass in the next
frame to test if a cluster would have been visible last frame or not.

As part of these changes, the write_index_buffer pass has been folded
into the culling pass for a large performance gain, and to avoid
tracking a lot of extra state that would be needed between passes.

Prepass previous model/view stuff was adapted to work with meshlets as
well.

Also fixed a bug with materials, and other misc improvements.

---------

Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
Co-authored-by: Robert Swain <robert.swain@gmail.com>
2024-04-28 05:30:20 +00:00
Aevyrie
ade70b3925
Per-Object Motion Blur (#9924)
https://github.com/bevyengine/bevy/assets/2632925/e046205e-3317-47c3-9959-fc94c529f7e0

# Objective

- Adds per-object motion blur to the core 3d pipeline. This is a common
effect used in games and other simulations.
- Partially resolves #4710

## Solution

- This is a post-process effect that uses the depth and motion vector
buffers to estimate per-object motion blur. The implementation is
combined from knowledge from multiple papers and articles. The approach
itself, and the shader are quite simple. Most of the effort was in
wiring up the bevy rendering plumbing, and properly specializing for HDR
and MSAA.
- To work with MSAA, the MULTISAMPLED_SHADING wgpu capability is
required. I've extracted this code from #9000. This is because the
prepass buffers are multisampled, and require accessing with
`textureLoad` as opposed to the widely compatible `textureSample`.
- Added an example to demonstrate the effect of motion blur parameters.

## Future Improvements

- While this approach does have limitations, it's one of the most
commonly used, and is much better than camera motion blur, which does
not consider object velocity. For example, this implementation allows a
dolly to track an object, and that object will remain unblurred while
the background is blurred. The biggest issue with this implementation is
that blur is constrained to the boundaries of objects, which results in
hard edges. This can be addressed by dilating either the object or the
motion vector buffer, or by taking a different approach such as
https://casual-effects.com/research/McGuire2012Blur/index.html
- I'm using a noise PRNG function to jitter samples. This could be
replaced with a blue noise texture lookup or similar; however, after
playing with the parameters, it gives quite nice results with 4 samples
and produces significantly fewer artifacts than not jittering at all.

---

## Changelog

- Added: per-object motion blur. This can be enabled and configured by
adding the `MotionBlurBundle` to a camera entity, as in the sketch below.
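A minimal sketch of doing that (the module path and the
`shutter_angle`/`samples` field names are assumptions and may differ
from the final API; the bundle and settings types are assumed to
implement `Default`):

```rust
use bevy::core_pipeline::motion_blur::{MotionBlur, MotionBlurBundle};
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        MotionBlurBundle {
            motion_blur: MotionBlur {
                // Fraction of the frame the virtual shutter stays open.
                shutter_angle: 1.0,
                // Number of jittered samples taken along the motion vector.
                samples: 4,
                ..default()
            },
            ..default()
        },
    ));
}
```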

---------

Co-authored-by: Torstein Grindvik <52322338+torsteingrindvik@users.noreply.github.com>
2024-04-25 01:16:02 +00:00
JMS55
6d6810c90d
Meshlet continuous LOD (#12755)
Adds a basic level of detail system to meshlets. An extremely brief
summary is as follows:
* In `from_mesh.rs`, once we've built the first level of clusters, we
group clusters, simplify the new mega-clusters, and then split the
simplified groups back into regular sized clusters. Repeat several times
(ideally until you can't anymore). This forms a directed acyclic graph
(DAG), where the children are the meshlets from the previous level, and
the parents are the more simplified versions of their children. The leaf
nodes are meshlets formed from the original mesh.
* In `cull_meshlets.wgsl`, each cluster selects whether to render or not
based on the LOD bounding sphere (different than the culling bounding
sphere) of the current meshlet, the LOD bounding sphere of its parent
(the meshlet group from simplification), and the simplification error
relative to its children of both the current meshlet and its parent
meshlet. This partially breaks two-pass occlusion culling, which will be
fixed in a future PR by using an HZB from the previous frame to get the
initial list of occluders. (A conceptual sketch of this selection test
follows this list.)
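A conceptual sketch of that selection test (plain Rust rather than the
actual WGSL, using a simplified screen-space error metric instead of the
LOD bounding spheres; the names and threshold are illustrative):

```rust
/// Per-cluster LOD data: a cluster's own simplification error and the error of
/// the coarser parent group it was simplified into (f32::INFINITY for DAG roots).
struct ClusterLod {
    self_error: f32,
    parent_error: f32,
}

/// Approximate screen-space size, in pixels, that a world-space error covers at
/// `distance` from a camera with vertical FOV `fov_y` and a viewport
/// `screen_height_px` pixels tall.
fn projected_error_px(error: f32, distance: f32, fov_y: f32, screen_height_px: f32) -> f32 {
    error * screen_height_px / (2.0 * distance * (fov_y * 0.5).tan())
}

/// A cluster is drawn when it is detailed enough for this view but its parent
/// group is not. Because errors never decrease toward the root, exactly one
/// level along any root-to-leaf path passes this test (given the shared,
/// monotone error metric), so the drawn clusters form a complete, crack-free cut.
fn should_draw(
    lod: &ClusterLod,
    distance: f32,
    fov_y: f32,
    screen_height_px: f32,
    threshold_px: f32,
) -> bool {
    let self_ok =
        projected_error_px(lod.self_error, distance, fov_y, screen_height_px) <= threshold_px;
    let parent_ok =
        projected_error_px(lod.parent_error, distance, fov_y, screen_height_px) <= threshold_px;
    self_ok && !parent_ok
}
```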

There are many, _many_ improvements to be made in the future
(https://github.com/bevyengine/bevy/issues/11518), not least of which
are code quality and speed. I don't even expect this to work on many
types of input meshes. This is just a basic implementation/draft for
collaboration.

It's arguable how much we want to do in this PR; I'll leave that up to
the maintainers. I've erred on the side of "as basic as possible".

References:
* Slides 27-77 (video available on youtube):
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* https://blog.traverseresearch.nl/creating-a-directed-acyclic-graph-from-a-mesh-1329e57286e5
* https://jglrxavpok.github.io/2024/01/19/recreating-nanite-lod-generation.html,
https://jglrxavpok.github.io/2024/03/12/recreating-nanite-faster-lod-generation.html,
https://jglrxavpok.github.io/2024/04/02/recreating-nanite-runtime-lod-selection.html,
and https://github.com/jglrxavpok/Carrot
* https://github.com/gents83/INOX/tree/master/crates/plugins/binarizer/src
* https://cs418.cs.illinois.edu/website/text/nanite.html


![image](https://github.com/bevyengine/bevy/assets/47158642/e40bff9b-7d0c-4a19-a3cc-2aad24965977)

![image](https://github.com/bevyengine/bevy/assets/47158642/442c7da3-7761-4da7-9acd-37f15dd13e26)

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
Co-authored-by: Patrick Walton <pcwalton@mimiga.net>
2024-04-23 21:43:53 +00:00
Grey
c593ee1055
Clarify comment about camera coordinate system (#13056)
# Objective

Clarify the comment about the camera's coordinate system in
`examples/3d/generate_custom_mesh.rs` by explicitly stating which axes
point where.
Fixes #13018

## Solution

Copy the wording from #13012 into the example.
2024-04-23 14:58:28 +00:00
IceSentry
8403c41c67
Use WireframeColor to override global color (#13034)
# Objective

- The docs say that `WireframeColor` is supposed to override the default
global color, but it doesn't.

## Solution

- Use `WireframeColor` to override the global color, as the docs say it
should (see the sketch after this list).
- Updated the example to document this feature
- I also took the opportunity to clean up the code a bit
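A minimal sketch of the override (assuming the current
`bevy::pbr::wireframe` module layout, that `WireframePlugin` is already
added to the app, and that a global color is configured via
`WireframeConfig`):

```rust
use bevy::pbr::wireframe::{Wireframe, WireframeColor};
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.spawn((
        PbrBundle {
            mesh: meshes.add(Cuboid::default()),
            material: materials.add(Color::WHITE),
            ..default()
        },
        Wireframe,
        // Overrides the global `WireframeConfig` color for this entity only.
        WireframeColor {
            color: Color::srgb(0.0, 1.0, 0.0),
        },
    ));
}
```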

Fixes #13032
2024-04-20 13:59:12 +00:00
andristarr
2b3e3341d6
separating finite and infinite 3d planes (#12426)
# Objective

Fixes #12388

## Solution

- Removing the plane3d and adding rect3d primitive mesh
2024-04-18 14:13:22 +00:00
Emily Selwood
da2ba8a43c
Add comment to example about coordinate system (#12991)
# Objective

When learning about creating meshes in Bevy using this example, I
couldn't tell which coordinate system Bevy uses, which caused confusion
and meant having to look it up elsewhere.

## Solution

Add a comment that says what coordinate system bevy uses.
2024-04-16 12:01:48 +00:00
Patrick Walton
1141e731ff
Implement alpha to coverage (A2C) support. (#12970)
[Alpha to coverage] (A2C) replaces alpha blending with a
hardware-specific multisample coverage mask when multisample
antialiasing is in use. It's a simple form of [order-independent
transparency] that relies on MSAA. ["Anti-aliased Alpha Test: The
Esoteric Alpha To Coverage"] is a good summary of the motivation for and
best practices relating to A2C.

This commit implements alpha to coverage support as a new variant for
`AlphaMode`. You can supply `AlphaMode::AlphaToCoverage` as the
`alpha_mode` field in `StandardMaterial` to use it. When in use, the
standard material shader automatically applies the texture filtering
method from ["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"].
Objects with alpha-to-coverage materials are binned in the opaque pass,
as they're fully order-independent.
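A minimal usage sketch (the mesh and material setup here are
illustrative; only the `alpha_mode` field matters):

```rust
use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.spawn(PbrBundle {
        mesh: meshes.add(Cuboid::default()),
        material: materials.add(StandardMaterial {
            // An alpha-cutout texture would normally be assigned to
            // `base_color_texture`; A2C takes effect whenever MSAA is enabled.
            alpha_mode: AlphaMode::AlphaToCoverage,
            ..default()
        }),
        ..default()
    });
}
```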

The `transparency_3d` example has been updated to feature an object with
alpha to coverage. Happily, the example was already using MSAA.

This is part of #2223, as far as I can tell.

[Alpha to coverage]: https://en.wikipedia.org/wiki/Alpha_to_coverage

[order-independent transparency]:
https://en.wikipedia.org/wiki/Order-independent_transparency

["Anti-aliased Alpha Test: The Esoteric Alpha To Coverage"]:
https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f

---

## Changelog

### Added

* The `AlphaMode` enum now supports `AlphaToCoverage`, to provide
limited order-independent transparency when multisample antialiasing is
in use.
2024-04-15 20:37:52 +00:00
Patrick Walton
d59b1e71ef
Implement percentage-closer filtering (PCF) for point lights. (#12910)
I ported the two existing PCF techniques to the cubemap domain as best I
could. Generally, the technique is to create a 2D orthonormal basis
using Gram-Schmidt normalization, then apply the technique over that
basis. The results look fine, though the shadow bias often needs
adjusting.
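The basis construction is roughly the following (a plain-Rust sketch of
the idea rather than the shader code; `light_dir` is assumed to be the
normalized fragment-to-light direction):

```rust
use bevy::math::Vec3;

/// Builds a 2D orthonormal basis perpendicular to `light_dir` by
/// Gram-Schmidt-orthogonalizing an arbitrary helper vector against it.
/// The PCF taps are then offset along `tangent` and `bitangent`.
fn orthonormal_basis(light_dir: Vec3) -> (Vec3, Vec3) {
    // Any vector that isn't parallel to `light_dir` works as a starting point.
    let helper = if light_dir.z.abs() < 0.99 { Vec3::Z } else { Vec3::X };
    let tangent = (helper - light_dir * helper.dot(light_dir)).normalize();
    let bitangent = light_dir.cross(tangent);
    (tangent, bitangent)
}
```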

For comparison, Unity uses a 4-tap pattern for PCF on point lights of
(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1). I tried this but
didn't like the look, so I went with the design above, which ports the
2D techniques to the 3D domain. There's surprisingly little material on
point light PCF.

I've gone through every example using point lights and verified that the
shadow maps look fine, adjusting biases as necessary.

Fixes #3628.

---

## Changelog

### Added
* Shadows from point lights now support percentage-closer filtering
(PCF), and as a result look less aliased.

### Changed
* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively.

## Migration Guide

* `ShadowFilteringMethod::Castano13` and
`ShadowFilteringMethod::Jimenez14` have been renamed to
`ShadowFilteringMethod::Gaussian` and `ShadowFilteringMethod::Temporal`
respectively; a usage sketch follows below.
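A minimal migration sketch (assuming `ShadowFilteringMethod` is still
the per-camera component exported from `bevy::pbr`):

```rust
use bevy::pbr::ShadowFilteringMethod;
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle::default(),
        // Formerly `ShadowFilteringMethod::Castano13`.
        ShadowFilteringMethod::Gaussian,
    ));
}
```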
2024-04-10 20:16:08 +00:00
BD103
97131e1909
Move close_on_esc to bevy_dev_tools (#12855)
# Objective

- As @james7132 said [on
Discord](https://discord.com/channels/691052431525675048/692572690833473578/1224626740773523536),
the `close_on_esc` system is forcing `bevy_window` to depend on
`bevy_input`.
- `close_on_esc` is not likely to be used in production, so it arguably
does not have a place in `bevy_window`.

## Solution

- As suggested by @afonsolage, move `close_on_esc` into
`bevy_dev_tools`.
  - Add an example to the documentation too.
- Remove `bevy_window`'s dependency on `bevy_input`.
- Add `bevy_reflect`'s `smol_str` feature to `bevy_window` because it
was implicitly depended upon with `bevy_input` before it was removed.
- Remove any usage of `close_on_esc` from the examples.
- `bevy_dev_tools` is not enabled by default. I personally find it
frustrating to run examples with additional features, so I opted to
remove it entirely.
  - This is up for discussion if you have an alternate solution.

---

## Changelog

- Moved `bevy_window::close_on_esc` to `bevy_dev_tools::close_on_esc`.
- Removed usage of `bevy_dev_tools::close_on_esc` from all examples.

## Migration Guide

`bevy_window::close_on_esc` has been moved to
`bevy_dev_tools::close_on_esc`. You will first need to enable
`bevy_dev_tools` as a feature in your `Cargo.toml`:

```toml
[dependencies]
bevy = { version = "0.14", features = ["bevy_dev_tools"] }
```

Finally, modify any imports to use `bevy_dev_tools` instead:

```rust
// Old:
// use bevy::window::close_on_esc;

// New:
use bevy::dev_tools::close_on_esc;

App::new()
    .add_systems(Update, close_on_esc)
    // ...
    .run();
```
2024-04-03 01:29:06 +00:00
Jake
7618884b2f
Fix UV coords in generate_custom_mesh example (#12826)
# Objective
Fix the last coordinate of the top side in the `generate_custom_mesh`
example.
Fixes #12822

## Images
Added a yellow square to the texture to demonstrate as suggested in [the
issue](https://github.com/bevyengine/bevy/issues/12822).

<details>
  <summary>Texture:</summary>
<img
src="https://github.com/bevyengine/bevy/assets/70668395/c605e916-bb02-4247-bf24-2f033c650419"
/>
</details>

<details>
  <summary>Before:</summary>
<img
src="https://github.com/bevyengine/bevy/assets/70668395/e67b592b-8001-42e3-a6ce-7d62ad59bf81"
/>
</details>

<details>
  <summary>After:</summary>
<img
src="https://github.com/bevyengine/bevy/assets/70668395/e9194784-ee4a-4848-87c2-0eb54e05236c"
/>
</details>

## Solution
Change the coordinate from 0.25 to 0.2.
2024-04-01 20:00:45 +00:00
BD103
84363f2fab
Remove redundant imports (#12817)
# Objective

- There are several redundant imports in the tests and examples that are
not caught by CI because additional flags need to be passed.

## Solution

- Run `cargo check --workspace --tests` and `cargo check --workspace
--examples`, then fix all warnings.
- Add `test-check` to CI, which will be run in the check-compiles job.
This should catch future warnings for tests. Examples are already
checked, but I'm not yet sure why they weren't caught.

## Discussion

- Should the `--tests` and `--examples` flags be added to CI, so this is
caught in the future?
- If so, #12818 will need to be merged first. It was also a warning
raised by checking the examples, but I chose to split off into a
separate PR.

---------

Co-authored-by: François Mockers <francois.mockers@vleue.com>
2024-04-01 19:59:08 +00:00
François Mockers
93fd02e8ea
remove DeterministicRenderingConfig (#12811)
# Objective

- Since #12453, `DeterministicRenderingConfig` doesn't do anything

## Solution

- Remove it

---

## Migration Guide

- Removed `DeterministicRenderingConfig`. There should no longer be any
z-fighting in rendering, even without setting `stable_sort_z_fighting`
2024-04-01 09:32:47 +00:00
andristarr
d39ab55b61
Adding explanation to seeded rng used in examples (#12593)
# Objective

- Fixes #12544

## Solution

- Added or updated a consistently worded comment on every seeded RNG
instance in our examples.
2024-03-26 19:40:18 +00:00
JMS55
4f20faaa43
Meshlet rendering (initial feature) (#10164)
# Objective
- Implements a more efficient, GPU-driven
(https://github.com/bevyengine/bevy/issues/1342) rendering pipeline
based on meshlets.
- Meshes are split into small clusters of triangles called meshlets,
each of which acts as a mini index buffer into the larger mesh data.
Meshlets can be compressed, streamed, culled, and batched much more
efficiently than monolithic meshes (a conceptual sketch follows this
list).
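Conceptually, the layout looks something like the sketch below (not
Bevy's actual `MeshletMesh` asset format; the cluster limits are typical
values, not necessarily the exact ones used here):

```rust
/// One cluster: a mini index buffer into the mesh's shared vertex data.
struct Meshlet {
    /// Indices into the mesh-wide vertex buffer (typically at most ~64).
    vertex_ids: Vec<u32>,
    /// Triangles as tiny local indices into `vertex_ids` (typically ~124 max).
    triangles: Vec<[u8; 3]>,
}

/// The whole mesh: vertex data is stored once and referenced by many small
/// clusters, each of which can be culled, streamed, and batched independently.
struct MeshletMeshSketch {
    vertex_positions: Vec<[f32; 3]>,
    meshlets: Vec<Meshlet>,
}
```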


![image](https://github.com/bevyengine/bevy/assets/47158642/cb2aaad0-7a9a-4e14-93b0-15d4e895b26a)

![image](https://github.com/bevyengine/bevy/assets/47158642/7534035b-1eb7-4278-9b99-5322e4401715)

# Misc
* Future work: https://github.com/bevyengine/bevy/issues/11518
* Nanite reference:
https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
* Two-pass occlusion culling explained very well:
https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501

---------

Co-authored-by: Ricky Taylor <rickytaylor26@gmail.com>
Co-authored-by: vero <email@atlasdostal.com>
Co-authored-by: François <mockersf@gmail.com>
Co-authored-by: atlas dostal <rodol@rivalrebels.com>
2024-03-25 19:08:27 +00:00