//! Types that detect when their internal data mutate.

use crate::{
    component::{Tick, TickCells},
    ptr::PtrMut,
    system::Resource,
};
use bevy_ptr::{Ptr, UnsafeCellDeref};
use std::ops::{Deref, DerefMut};
/// The (arbitrarily chosen) minimum number of world tick increments between `check_tick` scans.
///
/// Change ticks can only be scanned when systems aren't running. Thus, if the threshold is `N`,
/// the maximum interval between scans is `2 * N - 1` ticks (i.e. the world ticks `N - 1` times, then `N` times).
///
/// If no change is older than `u32::MAX - (2 * N - 1)` following a scan, no age can
/// overflow and cause false positives.
// (518,400,000 = 1000 ticks per frame * 144 frames per second * 3600 seconds per hour)
pub const CHECK_TICK_THRESHOLD: u32 = 518_400_000;

/// The maximum change tick difference that won't overflow before the next `check_tick` scan.
///
/// Changes stop being detected once they become this old.
pub const MAX_CHANGE_AGE: u32 = u32::MAX - (2 * CHECK_TICK_THRESHOLD - 1);
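
Change ages are computed with wrapping arithmetic, which is what makes the clamping described above necessary. A minimal sketch of that arithmetic (only the two constants come from this module; `tick_age` is an illustrative helper, not part of the API):

```rust
const CHECK_TICK_THRESHOLD: u32 = 518_400_000;
const MAX_CHANGE_AGE: u32 = u32::MAX - (2 * CHECK_TICK_THRESHOLD - 1);

/// Illustrative helper: the age of a change is the wrapping distance
/// from the tick it was recorded at to the world's current tick.
fn tick_age(recorded_at: u32, current_tick: u32) -> u32 {
    current_tick.wrapping_sub(recorded_at)
}

fn main() {
    // A change recorded just before the counter wrapped is still young.
    assert_eq!(tick_age(u32::MAX - 5, 10), 16);
    // A `check_tick` scan clamps ancient ages down to `MAX_CHANGE_AGE`
    // so they can never wrap around and look brand new again.
    let clamped = tick_age(3, u32::MAX).min(MAX_CHANGE_AGE);
    assert_eq!(clamped, MAX_CHANGE_AGE);
}
```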
/// Types that implement reliable change detection.
///
/// ## Example
/// Using types that implement [`DetectChanges`], such as [`ResMut`], provides
/// a way to query if a value has been mutated in another system.
/// Change detection is normally triggered by either [`DerefMut`] or [`AsMut`], however
/// it can be manually triggered via [`DetectChanges::set_changed`].
///
/// To ensure that changes are only triggered when the value actually differs,
/// check if the value would change before assignment, such as by checking that `new != old`.
/// You must be *sure* that you are not mutably dereferencing in this process.
///
/// [`set_if_neq`](DetectChanges::set_if_neq) is a helper
/// method for this common functionality.
///
/// ```
/// use bevy_ecs::prelude::*;
///
/// #[derive(Resource)]
/// struct MyResource(u32);
///
/// fn my_system(mut resource: ResMut<MyResource>) {
///     if resource.is_changed() {
///         println!("My resource was mutated!");
///     }
///
///     resource.0 = 42; // triggers change detection via [`DerefMut`]
/// }
/// ```
///
pub trait DetectChanges {
    /// The type contained within this smart pointer.
    ///
    /// For example, for `Res<T>` this would be `T`.
    type Inner: ?Sized;

    /// Returns `true` if this value was added after the system last ran.
    fn is_added(&self) -> bool;

    /// Returns `true` if this value was added or mutably dereferenced after the system last ran.
    fn is_changed(&self) -> bool;

    /// Flags this value as having been changed.
    ///
    /// Mutably accessing this smart pointer will automatically flag this value as having been changed.
    /// However, mutation through interior mutability requires manual reporting.
    ///
    /// **Note**: This operation cannot be undone.
    fn set_changed(&mut self);

    /// Returns the change tick recording the previous time this data was changed.
    ///
    /// Note that components and resources are also marked as changed upon insertion.
    ///
    /// For comparison, the previous change tick of a system can be read using the
    /// [`SystemChangeTick`](crate::system::SystemChangeTick)
    /// [`SystemParam`](crate::system::SystemParam).
    fn last_changed(&self) -> u32;

    /// Manually sets the change tick recording the previous time this data was mutated.
    ///
    /// # Warning
    /// This is a complex and error-prone operation, primarily intended for use with rollback networking strategies.
    /// If you merely want to flag this data as changed, use [`set_changed`](DetectChanges::set_changed) instead.
    /// If you want to avoid triggering change detection, use [`bypass_change_detection`](DetectChanges::bypass_change_detection) instead.
    fn set_last_changed(&mut self, last_change_tick: u32);

    /// Manually bypasses change detection, allowing you to mutate the underlying value without updating the change tick.
    ///
    /// # Warning
    /// This is a risky operation that can have unexpected consequences for any system relying on this code.
    /// However, it can be an essential escape hatch when, for example,
    /// you are trying to synchronize representations using change detection and need to avoid infinite recursion.
    fn bypass_change_detection(&mut self) -> &mut Self::Inner;

    /// Sets `self` to `value`, if and only if `*self != value`.
    ///
    /// `Target` is the type stored within the smart pointer (e.g. [`Mut`] or [`ResMut`]).
    ///
    /// This is useful to ensure change detection is only triggered when the underlying value
    /// changes, instead of every time [`DerefMut`] is used.
    fn set_if_neq<Target>(&mut self, value: Target)
    where
        Self: Deref<Target = Target> + DerefMut<Target = Target>,
        Target: PartialEq;
}

macro_rules! change_detection_impl {
    ($name:ident < $( $generics:tt ),+ >, $target:ty, $($traits:ident)?) => {
        impl<$($generics),* : ?Sized $(+ $traits)?> DetectChanges for $name<$($generics),*> {
            type Inner = $target;

            #[inline]
            fn is_added(&self) -> bool {
                self.ticks
                    .added
                    .is_older_than(self.ticks.last_change_tick, self.ticks.change_tick)
            }

            #[inline]
            fn is_changed(&self) -> bool {
                self.ticks
                    .changed
                    .is_older_than(self.ticks.last_change_tick, self.ticks.change_tick)
            }

            #[inline]
            fn set_changed(&mut self) {
                self.ticks
                    .changed
                    .set_changed(self.ticks.change_tick);
            }

            #[inline]
            fn last_changed(&self) -> u32 {
                self.ticks.last_change_tick
            }

            #[inline]
            fn set_last_changed(&mut self, last_change_tick: u32) {
                self.ticks.last_change_tick = last_change_tick;
            }

            #[inline]
            fn bypass_change_detection(&mut self) -> &mut Self::Inner {
                self.value
            }

            #[inline]
            fn set_if_neq<Target>(&mut self, value: Target)
            where
                Self: Deref<Target = Target> + DerefMut<Target = Target>,
                Target: PartialEq,
            {
                // This dereference is immutable, so it does not trigger change detection.
                if *<Self as Deref>::deref(self) != value {
                    // `DerefMut` usage triggers change detection.
                    *<Self as DerefMut>::deref_mut(self) = value;
                }
            }
        }

        impl<$($generics),* : ?Sized $(+ $traits)?> Deref for $name<$($generics),*> {
            type Target = $target;

            #[inline]
            fn deref(&self) -> &Self::Target {
                self.value
            }
        }

        impl<$($generics),* : ?Sized $(+ $traits)?> DerefMut for $name<$($generics),*> {
            #[inline]
            fn deref_mut(&mut self) -> &mut Self::Target {
                self.set_changed();
                self.value
            }
        }

        impl<$($generics),* $(: $traits)?> AsRef<$target> for $name<$($generics),*> {
            #[inline]
            fn as_ref(&self) -> &$target {
                self.deref()
            }
        }

        impl<$($generics),* $(: $traits)?> AsMut<$target> for $name<$($generics),*> {
            #[inline]
            fn as_mut(&mut self) -> &mut $target {
                self.deref_mut()
            }
        }
    };
}

macro_rules! impl_methods {
    ($name:ident < $( $generics:tt ),+ >, $target:ty, $($traits:ident)?) => {
        impl<$($generics),* : ?Sized $(+ $traits)?> $name<$($generics),*> {
            /// Consume `self` and return a mutable reference to the
            /// contained value while marking `self` as "changed".
            #[inline]
            pub fn into_inner(mut self) -> &'a mut $target {
                self.set_changed();
                self.value
            }

            /// Maps to an inner value by applying a function to the contained reference, without flagging a change.
            ///
            /// You should never modify the argument passed to the closure -- if you want to modify the data
            /// without flagging a change, consider using [`DetectChanges::bypass_change_detection`] to make your intent explicit.
            ///
            /// ```rust
            /// # use bevy_ecs::prelude::*;
            /// # pub struct Vec2;
            /// # impl Vec2 { pub const ZERO: Self = Self; }
            /// # #[derive(Component)] pub struct Transform { translation: Vec2 }
            /// # mod my_utils {
            /// #     pub fn set_if_not_equal<T>(x: bevy_ecs::prelude::Mut<T>, val: T) { unimplemented!() }
            /// # }
            /// // When run, zeroes the translation of every entity.
            /// fn reset_positions(mut transforms: Query<&mut Transform>) {
            ///     for transform in &mut transforms {
            ///         // We pinky promise not to modify `t` within the closure.
            ///         // Breaking this promise will result in logic errors, but will never cause undefined behavior.
            ///         let translation = transform.map_unchanged(|t| &mut t.translation);
            ///         // Only reset the translation if it isn't already zero.
            ///         my_utils::set_if_not_equal(translation, Vec2::ZERO);
            ///     }
            /// }
            /// # bevy_ecs::system::assert_is_system(reset_positions);
            /// ```
            pub fn map_unchanged<U: ?Sized>(self, f: impl FnOnce(&mut $target) -> &mut U) -> Mut<'a, U> {
                Mut {
                    value: f(self.value),
                    ticks: self.ticks,
                }
            }
        }
    };
}
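
Outside of bevy, the trick `map_unchanged` relies on can be sketched with a hypothetical `View` wrapper: the projection carries along the same change flag (here a plain `bool` standing in for the tick cells), so narrowing the view flags nothing by itself.

```rust
// Hypothetical stand-in for `Mut`: a projected view that shares the
// original's change flag with whatever it was projected from.
struct View<'a, T: ?Sized> {
    value: &'a mut T,
    changed: &'a mut bool,
}

impl<'a, T: ?Sized> View<'a, T> {
    // Project to an inner field without flagging a change; the returned
    // view still refers to the same `changed` flag for later mutations.
    fn map_unchanged<U: ?Sized>(self, f: impl FnOnce(&mut T) -> &mut U) -> View<'a, U> {
        View {
            value: f(self.value),
            changed: self.changed,
        }
    }
}

struct Transform {
    translation: (f32, f32),
}

fn main() {
    let mut changed = false;
    let mut t = Transform { translation: (1.0, 2.0) };
    let view = View { value: &mut t, changed: &mut changed };
    // Narrow the view to just the translation; no change is flagged.
    let translation = view.map_unchanged(|tf| &mut tf.translation);
    assert!(!*translation.changed);
    // Writing through the projected view still reaches the original data.
    *translation.value = (0.0, 0.0);
    assert_eq!(t.translation, (0.0, 0.0));
}
```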

macro_rules! impl_debug {
    ($name:ident < $( $generics:tt ),+ >, $($traits:ident)?) => {
        impl<$($generics),* : ?Sized $(+ $traits)?> std::fmt::Debug for $name<$($generics),*>
        where
            T: std::fmt::Debug,
        {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                f.debug_tuple(stringify!($name))
                    .field(&self.value)
                    .finish()
            }
        }
    };
}

pub(crate) struct Ticks<'a> {
    pub(crate) added: &'a mut Tick,
    pub(crate) changed: &'a mut Tick,
    pub(crate) last_change_tick: u32,
    pub(crate) change_tick: u32,
}

impl<'a> Ticks<'a> {
    /// # Safety
    /// This should never alias the underlying ticks. All access must be unique.
    #[inline]
    pub(crate) unsafe fn from_tick_cells(
        cells: TickCells<'a>,
        last_change_tick: u32,
        change_tick: u32,
    ) -> Self {
        Self {
            added: cells.added.deref_mut(),
            changed: cells.changed.deref_mut(),
            last_change_tick,
            change_tick,
        }
    }
}
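
The `# Safety` contract above can be illustrated without bevy's internals: an `UnsafeCell` (the primitive underneath `TickCells`) lets callers mint `&mut` references the borrow checker never sees, so uniqueness becomes the caller's obligation. A minimal sketch, with illustrative names:

```rust
use std::cell::UnsafeCell;

fn main() {
    let tick = UnsafeCell::new(0u32);
    // SAFETY: this is the only live reference to the cell's contents,
    // mirroring the "all access must be unique" requirement above.
    let tick_mut: &mut u32 = unsafe { &mut *tick.get() };
    *tick_mut = 42;
    // Creating a second `&mut` while `tick_mut` was still live would be
    // undefined behavior; here the first borrow has already ended.
    assert_eq!(unsafe { *tick.get() }, 42);
}
```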

/// Unique mutable borrow of a [`Resource`].
///
/// See the [`Resource`] documentation for usage.
///
/// If you need a shared borrow, use [`Res`](crate::system::Res) instead.
///
/// # Panics
///
/// Panics when used as a [`SystemParam`](crate::system::SystemParam) if the resource does not exist.
///
/// Use `Option<ResMut<T>>` instead if the resource might not always exist.
pub struct ResMut<'a, T: ?Sized + Resource> {
    pub(crate) value: &'a mut T,
    pub(crate) ticks: Ticks<'a>,
}

impl<'w, 'a, T: Resource> IntoIterator for &'a ResMut<'w, T>
where
    &'a T: IntoIterator,
{
    type Item = <&'a T as IntoIterator>::Item;
    type IntoIter = <&'a T as IntoIterator>::IntoIter;

    fn into_iter(self) -> Self::IntoIter {
        self.value.into_iter()
    }
}

impl<'w, 'a, T: Resource> IntoIterator for &'a mut ResMut<'w, T>
where
    &'a mut T: IntoIterator,
{
    type Item = <&'a mut T as IntoIterator>::Item;
    type IntoIter = <&'a mut T as IntoIterator>::IntoIter;

    fn into_iter(self) -> Self::IntoIter {
        self.set_changed();
        self.value.into_iter()
    }
}

change_detection_impl!(ResMut<'a, T>, T, Resource);
impl_methods!(ResMut<'a, T>, T, Resource);
impl_debug!(ResMut<'a, T>, Resource);

impl<'a, T: Resource> From<ResMut<'a, T>> for Mut<'a, T> {
    /// Convert this `ResMut` into a `Mut`. This allows keeping the change-detection feature of `Mut`
    /// while losing the specificity of `ResMut` for resources.
    fn from(other: ResMut<'a, T>) -> Mut<'a, T> {
        Mut {
            value: other.value,
            ticks: other.ticks,
        }
    }
}

/// Unique borrow of a non-[`Send`] resource.
///
/// Only [`Send`] resources may be accessed with the [`ResMut`] [`SystemParam`](crate::system::SystemParam). In case that the
/// resource does not implement `Send`, this `SystemParam` wrapper can be used. This will instruct
/// the scheduler to instead run the system on the main thread so that it doesn't send the resource
/// over to another thread.
///
/// # Panics
///
/// Panics when used as a [`SystemParam`](crate::system::SystemParam) if the resource does not exist.
///
/// Use `Option<NonSendMut<T>>` instead if the resource might not always exist.
pub struct NonSendMut<'a, T: ?Sized + 'static> {
    pub(crate) value: &'a mut T,
    pub(crate) ticks: Ticks<'a>,
}

change_detection_impl!(NonSendMut<'a, T>, T,);
impl_methods!(NonSendMut<'a, T>, T,);
impl_debug!(NonSendMut<'a, T>,);

impl<'a, T: 'static> From<NonSendMut<'a, T>> for Mut<'a, T> {
    /// Convert this `NonSendMut` into a `Mut`. This allows keeping the change-detection feature of `Mut`
    /// while losing the specificity of `NonSendMut`.
    fn from(other: NonSendMut<'a, T>) -> Mut<'a, T> {
        Mut {
            value: other.value,
            ticks: other.ticks,
        }
    }
}

/// Unique mutable borrow of an entity's component.
pub struct Mut<'a, T: ?Sized> {
    pub(crate) value: &'a mut T,
    pub(crate) ticks: Ticks<'a>,
}

impl<'w, 'a, T> IntoIterator for &'a Mut<'w, T>
where
    &'a T: IntoIterator,
{
    type Item = <&'a T as IntoIterator>::Item;
    type IntoIter = <&'a T as IntoIterator>::IntoIter;

    fn into_iter(self) -> Self::IntoIter {
        self.value.into_iter()
    }
}

impl<'w, 'a, T> IntoIterator for &'a mut Mut<'w, T>
where
    &'a mut T: IntoIterator,
{
    type Item = <&'a mut T as IntoIterator>::Item;
    type IntoIter = <&'a mut T as IntoIterator>::IntoIter;

    fn into_iter(self) -> Self::IntoIter {
        self.set_changed();
        self.value.into_iter()
    }
}

change_detection_impl!(Mut<'a, T>, T,);
impl_methods!(Mut<'a, T>, T,);
impl_debug!(Mut<'a, T>,);

/// Unique mutable borrow of resources or an entity's component.
///
/// Similar to [`Mut`], but not generic over the component type, instead
/// exposing the raw pointer as a `*mut ()`.
///
/// Usually you don't need to use this and can instead use the APIs returning a
/// [`Mut`], but in situations where the types are not known at compile time
/// or are defined outside of Rust this can be used.
pub struct MutUntyped<'a> {
    pub(crate) value: PtrMut<'a>,
    pub(crate) ticks: Ticks<'a>,
}

impl<'a> MutUntyped<'a> {
    /// Returns the pointer to the value, marking it as changed.
    ///
    /// In order to avoid marking the value as changed, you need to call [`bypass_change_detection`](DetectChanges::bypass_change_detection).
    #[inline]
    pub fn into_inner(mut self) -> PtrMut<'a> {
        self.set_changed();
        self.value
    }

    /// Returns a pointer to the value without taking ownership of this smart pointer, marking it as changed.
    ///
    /// In order to avoid marking the value as changed, you need to call [`bypass_change_detection`](DetectChanges::bypass_change_detection).
    #[inline]
    pub fn as_mut(&mut self) -> PtrMut<'_> {
        self.set_changed();
        self.value.reborrow()
    }

    /// Returns an immutable pointer to the value without taking ownership.
    #[inline]
    pub fn as_ref(&self) -> Ptr<'_> {
        self.value.as_ref()
    }
}

impl<'a> DetectChanges for MutUntyped<'a> {
    type Inner = PtrMut<'a>;

    #[inline]
    fn is_added(&self) -> bool {
        self.ticks
            .added
            .is_older_than(self.ticks.last_change_tick, self.ticks.change_tick)
    }

    #[inline]
    fn is_changed(&self) -> bool {
        self.ticks
            .changed
            .is_older_than(self.ticks.last_change_tick, self.ticks.change_tick)
    }

    #[inline]
    fn set_changed(&mut self) {
        self.ticks.changed.set_changed(self.ticks.change_tick);
    }

    #[inline]
    fn last_changed(&self) -> u32 {
        self.ticks.last_change_tick
    }

    #[inline]
    fn set_last_changed(&mut self, last_change_tick: u32) {
        self.ticks.last_change_tick = last_change_tick;
    }

    #[inline]
    fn bypass_change_detection(&mut self) -> &mut Self::Inner {
        &mut self.value
    }

    #[inline]
    fn set_if_neq<Target>(&mut self, value: Target)
    where
        Self: Deref<Target = Target> + DerefMut<Target = Target>,
        Target: PartialEq,
    {
        // This dereference is immutable, so does not trigger change detection
        if *<Self as Deref>::deref(self) != value {
            // `DerefMut` usage triggers change detection
            *<Self as DerefMut>::deref_mut(self) = value;
        }
    }
}
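The `set_if_neq` implementation above relies on the split between `Deref` (no change recorded) and `DerefMut` (pessimistically records a change). A std-only sketch with a hypothetical `TrackedMut` type (not part of Bevy) showing why reading through `Deref` first avoids spurious change flags:

```rust
use std::ops::{Deref, DerefMut};

// Hypothetical wrapper pairing a value with a change flag, mimicking how
// `Mut`-style smart pointers record mutations.
struct TrackedMut<'a, T> {
    value: &'a mut T,
    changed: &'a mut bool,
}

impl<T> Deref for TrackedMut<'_, T> {
    type Target = T;
    // Immutable access: does not touch the change flag.
    fn deref(&self) -> &T {
        self.value
    }
}

impl<T> DerefMut for TrackedMut<'_, T> {
    // Mutable access pessimistically marks the value as changed.
    fn deref_mut(&mut self) -> &mut T {
        *self.changed = true;
        self.value
    }
}

impl<T: PartialEq> TrackedMut<'_, T> {
    fn set_if_neq(&mut self, value: T) {
        // Compare through `Deref` so an equal value leaves the flag untouched.
        if *<Self as Deref>::deref(self) != value {
            // Only a genuinely new value goes through `DerefMut`.
            *<Self as DerefMut>::deref_mut(self) = value;
        }
    }
}

fn main() {
    let (mut v, mut changed) = (5u8, false);
    TrackedMut { value: &mut v, changed: &mut changed }.set_if_neq(5);
    assert!(!changed); // equal value: no change recorded
    TrackedMut { value: &mut v, changed: &mut changed }.set_if_neq(7);
    assert!(changed); // new value: change recorded
    assert_eq!(v, 7);
}
```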

impl std::fmt::Debug for MutUntyped<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_tuple("MutUntyped")
            .field(&self.value.as_ptr())
            .finish()
    }
}

#[cfg(test)]
mod tests {
    use bevy_ecs_macros::Resource;

    use crate::{
        self as bevy_ecs,
        change_detection::{Mut, NonSendMut, ResMut, Ticks, CHECK_TICK_THRESHOLD, MAX_CHANGE_AGE},
        component::{Component, ComponentTicks, Tick},
        query::ChangeTrackers,
        system::{IntoSystem, Query, System},
        world::World,
    };

    use super::DetectChanges;

    #[derive(Component, PartialEq)]
    struct C;

    #[derive(Resource)]
    struct R;
|
2022-07-25 16:11:29 +00:00
|
|
|
|
2022-12-11 19:24:19 +00:00
|
|
|
#[derive(Resource, PartialEq)]
|
|
|
|
struct R2(u8);
|
|
|
|
|
Make change lifespan deterministic and update docs (#3956)
## Objective
- ~~Make absurdly long-lived changes stay detectable for even longer (without leveling up to `u64`).~~
- Give all changes a consistent maximum lifespan.
- Improve code clarity.
## Solution
- ~~Increase the frequency of `check_tick` scans to increase the oldest reliably-detectable change.~~
(Deferred until we can benchmark the cost of a scan.)
- Ignore changes older than the maximum reliably-detectable age.
- General refactoring—name the constants, use them everywhere, and update the docs.
- Update test cases to check for the specified behavior.
## Related
This PR addresses (at least partially) the concerns raised in:
- #3071
- #3082 (and associated PR #3084)
## Background
- #1471
Given the minimum interval between `check_ticks` scans, `N`, the oldest reliably-detectable change is `u32::MAX - (2 * N - 1)` (or `MAX_CHANGE_AGE`). Reducing `N` from ~530 million (current value) to something like ~2 million would extend the lifetime of changes by a billion.
| minimum `check_ticks` interval | oldest reliably-detectable change | usable % of `u32::MAX` |
| --- | --- | --- |
| `u32::MAX / 8` (536,870,911) | `(u32::MAX / 4) * 3` | 75.0% |
| `2_000_000` | `u32::MAX - 3_999_999` | 99.9% |
Similarly, changes are still allowed to be between `MAX_CHANGE_AGE`-old and `u32::MAX`-old in the interim between `check_tick` scans. While we prevent their age from overflowing, the test to detect changes still compares raw values. This makes failure ultimately unreliable, since when ancient changes stop being detected varies depending on when the next scan occurs.
## Open Question
Currently, systems and system states are incorrectly initialized with their `last_change_tick` set to `0`, which doesn't handle wraparound correctly.
For consistent behavior, they should either be initialized to the world's `last_change_tick` (and detect no changes) or to `MAX_CHANGE_AGE` behind the world's current `change_tick` (and detect everything as a change). I've currently gone with the latter since that was closer to the existing behavior.
## Follow-up Work
(Edited: entire section)
We haven't actually profiled how long a `check_ticks` scan takes on a "large" `World`, so we don't know if it's safe to increase their frequency. However, we are currently relying on play sessions not lasting long enough to trigger a scan and apps not having enough entities/archetypes for it to be "expensive" (our assumption). That isn't a real solution. (Either scanning never costs enough to impact frame times or we provide an option to use `u64` change ticks. Nobody will accept random hiccups.)
To further extend the lifetime of changes, we actually only need to increment the world tick if a system has `Fetch: !ReadOnlySystemParamFetch`. The behavior will be identical because all writes are sequenced, but I'm not sure how to implement that in a way that the compiler can optimize the branch out.
Also, since having no false positives depends on a `check_ticks` scan running at least every `2 * N - 1` ticks, a `last_check_tick` should also be stored in the `World` so that any lull in system execution (like a command flush) could trigger a scan if needed. To be completely robust, all the systems initialized on the world should be scanned, not just those in the current stage.
2022-05-09 14:00:16 +00:00
|
|
|
#[test]
|
|
|
|
fn change_expiration() {
|
|
|
|
fn change_detected(query: Query<ChangeTrackers<C>>) -> bool {
|
|
|
|
query.single().is_changed()
|
|
|
|
}
|
|
|
|
|
|
|
|
fn change_expired(query: Query<ChangeTrackers<C>>) -> bool {
|
|
|
|
query.single().is_changed()
|
|
|
|
}
|
|
|
|
|
|
|
|
let mut world = World::new();
|
|
|
|
|
|
|
|
// component added: 1, changed: 1
|
Spawn now takes a Bundle (#6054)
# Objective
Now that we can consolidate Bundles and Components under a single insert (thanks to #2975 and #6039), almost 100% of world spawns now look like `world.spawn().insert((Some, Tuple, Here))`. Spawning an entity without any components is an extremely uncommon pattern, so it makes sense to give spawn the "first class" ergonomic api. This consolidated api should be made consistent across all spawn apis (such as World and Commands).
## Solution
All `spawn` apis (`World::spawn`, `Commands::spawn`, `ChildBuilder::spawn`, and `WorldChildBuilder::spawn`) now accept a bundle as input:
```rust
// before:
commands
.spawn()
.insert((A, B, C));
world
.spawn()
.insert((A, B, C));
// after
commands.spawn((A, B, C));
world.spawn((A, B, C));
```
All existing instances of `spawn_bundle` have been deprecated in favor of the new `spawn` api. A new `spawn_empty` has been added, replacing the old `spawn` api.
By allowing `world.spawn(some_bundle)` to replace `world.spawn().insert(some_bundle)`, this opened the door to removing the initial entity allocation in the "empty" archetype / table done in `spawn()` (and subsequent move to the actual archetype in `.insert(some_bundle)`).
This improves spawn performance by over 10%:
![image](https://user-images.githubusercontent.com/2694663/191627587-4ab2f949-4ccd-4231-80eb-80dd4d9ad6b9.png)
To take this measurement, I added a new `world_spawn` benchmark.
Unfortunately, optimizing `Commands::spawn` is slightly less trivial, as Commands expose the Entity id of spawned entities prior to actually spawning. Doing the optimization would (naively) require assurances that the `spawn(some_bundle)` command is applied before all other commands involving the entity (which would not necessarily be true, if memory serves). Optimizing `Commands::spawn` this way does feel possible, but it will require careful thought (and maybe some additional checks), which deserves its own PR. For now, it has the same performance characteristics of the current `Commands::spawn_bundle` on main.
**Note that 99% of this PR is simple renames and refactors. The only code that needs careful scrutiny is the new `World::spawn()` impl, which is relatively straightforward, but it has some new unsafe code (which re-uses the battle-tested `BundleSpawner` code path).**
---
## Changelog
- All `spawn` apis (`World::spawn`, `Commands::spawn`, `ChildBuilder::spawn`, and `WorldChildBuilder::spawn`) now accept a bundle as input
- All instances of `spawn_bundle` have been deprecated in favor of the new `spawn` api
- World and Commands now have `spawn_empty()`, which is equivalent to the old `spawn()` behavior.
## Migration Guide
```rust
// Old (0.8):
commands
.spawn()
.insert_bundle((A, B, C));
// New (0.9)
commands.spawn((A, B, C));
// Old (0.8):
commands.spawn_bundle((A, B, C));
// New (0.9)
commands.spawn((A, B, C));
// Old (0.8):
let entity = commands.spawn().id();
// New (0.9)
let entity = commands.spawn_empty().id();
// Old (0.8)
let entity = world.spawn().id();
// New (0.9)
let entity = world.spawn_empty().id();
```
2022-09-23 19:55:54 +00:00
|
|
|
world.spawn(C);
|
|
|
|
|
|
|
|
let mut change_detected_system = IntoSystem::into_system(change_detected);
|
|
|
|
let mut change_expired_system = IntoSystem::into_system(change_expired);
|
|
|
|
change_detected_system.initialize(&mut world);
|
|
|
|
change_expired_system.initialize(&mut world);
|
|
|
|
|
|
|
|
// world: 1, system last ran: 0, component changed: 1
|
|
|
|
// The spawn will be detected since it happened after the system "last ran".
|
|
|
|
assert!(change_detected_system.run((), &mut world));
|
|
|
|
|
|
|
|
// world: 1 + MAX_CHANGE_AGE
|
|
|
|
let change_tick = world.change_tick.get_mut();
|
|
|
|
*change_tick = change_tick.wrapping_add(MAX_CHANGE_AGE);
|
|
|
|
|
|
|
|
// Both the system and component appeared `MAX_CHANGE_AGE` ticks ago.
|
|
|
|
// Since we clamp things to `MAX_CHANGE_AGE` for determinism,
|
|
|
|
// `ComponentTicks::is_changed` will now see `MAX_CHANGE_AGE > MAX_CHANGE_AGE`
|
|
|
|
// and return `false`.
|
|
|
|
assert!(!change_expired_system.run((), &mut world));
|
|
|
|
}
|
|
|
|
|
|
|
|
#[test]
|
|
|
|
fn change_tick_wraparound() {
|
|
|
|
fn change_detected(query: Query<ChangeTrackers<C>>) -> bool {
|
|
|
|
query.single().is_changed()
|
|
|
|
}
|
|
|
|
|
|
|
|
let mut world = World::new();
|
|
|
|
world.last_change_tick = u32::MAX;
|
|
|
|
*world.change_tick.get_mut() = 0;
|
|
|
|
|
|
|
|
// component added: 0, changed: 0
|
|
|
|
world.spawn(C);
|
|
|
|
|
|
|
|
// system last ran: u32::MAX
|
|
|
|
let mut change_detected_system = IntoSystem::into_system(change_detected);
|
|
|
|
change_detected_system.initialize(&mut world);
|
|
|
|
|
|
|
|
// Since the world is always ahead, as long as changes can't get older than `u32::MAX` (which we ensure),
|
|
|
|
// the wrapping difference will always be positive, so wraparound doesn't matter.
|
|
|
|
assert!(change_detected_system.run((), &mut world));
|
|
|
|
}
|
|
|
|
|
|
|
|
#[test]
|
|
|
|
fn change_tick_scan() {
|
|
|
|
let mut world = World::new();
|
|
|
|
|
|
|
|
// component added: 1, changed: 1
|
|
|
|
world.spawn(C);
|
|
|
|
|
|
|
|
// a bunch of stuff happens, the component is now older than `MAX_CHANGE_AGE`
|
|
|
|
*world.change_tick.get_mut() += MAX_CHANGE_AGE + CHECK_TICK_THRESHOLD;
|
|
|
|
let change_tick = world.change_tick();
|
|
|
|
|
|
|
|
let mut query = world.query::<ChangeTrackers<C>>();
|
|
|
|
for tracker in query.iter(&world) {
|
2022-11-21 12:59:09 +00:00
|
|
|
let ticks_since_insert = change_tick.wrapping_sub(tracker.component_ticks.added.tick);
|
|
|
|
let ticks_since_change = change_tick.wrapping_sub(tracker.component_ticks.changed.tick);
|
|
|
|
assert!(ticks_since_insert > MAX_CHANGE_AGE);
|
|
|
|
assert!(ticks_since_change > MAX_CHANGE_AGE);
|
|
|
|
}
|
|
|
|
|
|
|
|
// scan change ticks and clamp those at risk of overflow
|
|
|
|
world.check_change_ticks();
|
|
|
|
|
|
|
|
for tracker in query.iter(&world) {
|
2022-11-21 12:59:09 +00:00
|
|
|
let ticks_since_insert = change_tick.wrapping_sub(tracker.component_ticks.added.tick);
|
|
|
|
let ticks_since_change = change_tick.wrapping_sub(tracker.component_ticks.changed.tick);
|
            assert!(ticks_since_insert == MAX_CHANGE_AGE);
            assert!(ticks_since_change == MAX_CHANGE_AGE);
        }
    }
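The clamp test above leans on modular tick arithmetic. A standalone sketch (the constant and tick values here are illustrative, not Bevy's actual numbers) of how `wrapping_sub` measures a change's age correctly across `u32` wraparound, and how clamping pins an over-age tick so its age reads exactly `MAX_CHANGE_AGE`:

```rust
fn main() {
    // Illustrative value; the real constant depends on the scan interval N:
    // MAX_CHANGE_AGE = u32::MAX - (2 * N - 1).
    const MAX_CHANGE_AGE: u32 = u32::MAX - (2 * 100_000 - 1);

    // A world tick that has wrapped past u32::MAX...
    let change_tick: u32 = 5;
    // ...compared against a change recorded just before the wrap.
    let stored_tick: u32 = u32::MAX - 10;

    // wrapping_sub measures the age across the wrap point; plain
    // subtraction would panic in a debug build.
    let age = change_tick.wrapping_sub(stored_tick);
    assert_eq!(16, age);

    // A tick older than MAX_CHANGE_AGE gets rewritten so that its age
    // reads exactly MAX_CHANGE_AGE, mirroring a check_change_ticks scan.
    let mut ancient_tick: u32 = change_tick.wrapping_add(1); // age u32::MAX
    if change_tick.wrapping_sub(ancient_tick) > MAX_CHANGE_AGE {
        ancient_tick = change_tick.wrapping_sub(MAX_CHANGE_AGE);
    }
    assert_eq!(MAX_CHANGE_AGE, change_tick.wrapping_sub(ancient_tick));
}
```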

    #[test]
    fn mut_from_res_mut() {
        let mut component_ticks = ComponentTicks {
            added: Tick::new(1),
            changed: Tick::new(2),
        };
        let ticks = Ticks {
            added: &mut component_ticks.added,
            changed: &mut component_ticks.changed,
            last_change_tick: 3,
            change_tick: 4,
        };
        let mut res = R {};
        let res_mut = ResMut {
            value: &mut res,
            ticks,
        };

        let into_mut: Mut<R> = res_mut.into();
        assert_eq!(1, into_mut.ticks.added.tick);
        assert_eq!(2, into_mut.ticks.changed.tick);
        assert_eq!(3, into_mut.ticks.last_change_tick);
        assert_eq!(4, into_mut.ticks.change_tick);
    }
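The conversion being checked here is a plain `From` impl that moves the change-tracking metadata across untouched. A standalone sketch (type and field names are illustrative stand-ins, not Bevy's actual definitions) of that pattern: converting a specialized wrapper into a common mutable view must carry every tick field over unchanged.

```rust
// Simplified stand-ins for the tick metadata and the two wrapper types.
struct TicksDemo {
    added: u32,
    changed: u32,
    last_change_tick: u32,
    change_tick: u32,
}

struct ResMutDemo<'a, T> {
    value: &'a mut T,
    ticks: TicksDemo,
}

struct MutDemo<'a, T> {
    value: &'a mut T,
    ticks: TicksDemo,
}

impl<'a, T> From<ResMutDemo<'a, T>> for MutDemo<'a, T> {
    fn from(res_mut: ResMutDemo<'a, T>) -> Self {
        // Move the reference and the tick metadata across unchanged;
        // nothing is reset or re-stamped.
        MutDemo {
            value: res_mut.value,
            ticks: res_mut.ticks,
        }
    }
}

fn main() {
    let mut res = 0_i32;
    let res_mut = ResMutDemo {
        value: &mut res,
        ticks: TicksDemo {
            added: 1,
            changed: 2,
            last_change_tick: 3,
            change_tick: 4,
        },
    };

    let into_mut: MutDemo<i32> = res_mut.into();
    assert_eq!(1, into_mut.ticks.added);
    assert_eq!(2, into_mut.ticks.changed);
    assert_eq!(3, into_mut.ticks.last_change_tick);
    assert_eq!(4, into_mut.ticks.change_tick);
}
```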

    #[test]
    fn mut_from_non_send_mut() {
        let mut component_ticks = ComponentTicks {
            added: Tick::new(1),
            changed: Tick::new(2),
        };
        let ticks = Ticks {
            added: &mut component_ticks.added,
            changed: &mut component_ticks.changed,
            last_change_tick: 3,
            change_tick: 4,
        };
        let mut res = R {};
        let non_send_mut = NonSendMut {
            value: &mut res,
            ticks,
        };

        let into_mut: Mut<R> = non_send_mut.into();
        assert_eq!(1, into_mut.ticks.added.tick);
        assert_eq!(2, into_mut.ticks.changed.tick);
        assert_eq!(3, into_mut.ticks.last_change_tick);
        assert_eq!(4, into_mut.ticks.change_tick);
    }

    #[test]
    fn map_mut() {
        use super::*;
        struct Outer(i64);

        let (last_change_tick, change_tick) = (2, 3);
        let mut component_ticks = ComponentTicks {
            added: Tick::new(1),
            changed: Tick::new(2),
        };
        let ticks = Ticks {
            added: &mut component_ticks.added,
            changed: &mut component_ticks.changed,
            last_change_tick,
            change_tick,
        };

        let mut outer = Outer(0);
        let ptr = Mut {
            value: &mut outer,
            ticks,
        };
        assert!(!ptr.is_changed());

        // Perform a mapping operation.
        let mut inner = ptr.map_unchanged(|x| &mut x.0);
        assert!(!inner.is_changed());

        // Mutate the inner value.
        *inner = 64;
        assert!(inner.is_changed());

        // Modifying one field of a component should flag a change for the entire component.
        assert!(component_ticks.is_changed(last_change_tick, change_tick));
    }
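The key property `map_mut` checks is that a projected view shares change state with its parent. A standalone sketch (types and methods here are illustrative, not Bevy's actual `Mut` API) of that idea: projecting to a field does not flag a change by itself, but writing through the projection marks the shared state changed.

```rust
// Minimal mutable wrapper sharing a change flag with its projections.
struct MutSketch<'a, T> {
    value: &'a mut T,
    changed: &'a mut bool,
}

impl<'a, T> MutSketch<'a, T> {
    // Project to a sub-value without touching the change flag.
    fn map_unchanged<U>(self, f: impl FnOnce(&mut T) -> &mut U) -> MutSketch<'a, U> {
        MutSketch {
            value: f(self.value),
            changed: self.changed,
        }
    }

    // Any write access marks the shared state as changed.
    fn deref_mut(&mut self) -> &mut T {
        *self.changed = true;
        self.value
    }
}

struct Outer(i64);

fn main() {
    let mut changed = false;
    let mut outer = Outer(0);
    let ptr = MutSketch {
        value: &mut outer,
        changed: &mut changed,
    };

    // Projecting to a field does not flag a change by itself...
    let mut inner = ptr.map_unchanged(|o| &mut o.0);

    // ...but writing through the projection flags the whole value.
    *inner.deref_mut() = 64;
    drop(inner);

    assert!(changed);
    assert_eq!(64, outer.0);
}
```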

    #[test]
    fn set_if_neq() {
        let mut world = World::new();

        world.insert_resource(R2(0));
        // Resources are Changed when first added
        world.increment_change_tick();
        // This is required to update world::last_change_tick
        world.clear_trackers();

        let mut r = world.resource_mut::<R2>();
        assert!(!r.is_changed(), "Resource must begin unchanged.");

        r.set_if_neq(R2(0));
        assert!(
            !r.is_changed(),
            "Resource must not be changed after setting to the same value."
        );

        r.set_if_neq(R2(3));
        assert!(
            r.is_changed(),
            "Resource must be changed after setting to a different value."
        );
    }
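The contract exercised above can be stated in a few lines. A standalone sketch (simplified, not Bevy's actual `DetectChangesMut` implementation) of `set_if_neq` semantics: compare first, and only a write of a genuinely different value sets the change flag.

```rust
// Minimal value-plus-change-flag container.
struct TrackedSketch<T> {
    value: T,
    changed: bool,
}

impl<T: PartialEq> TrackedSketch<T> {
    // Write and flag a change only when the new value differs.
    fn set_if_neq(&mut self, new: T) {
        if self.value != new {
            self.value = new;
            self.changed = true;
        }
    }
}

fn main() {
    let mut r = TrackedSketch {
        value: 0,
        changed: false,
    };

    // Writing an equal value leaves the change flag untouched.
    r.set_if_neq(0);
    assert!(!r.changed);

    // Writing a different value flags the change.
    r.set_if_neq(3);
    assert!(r.changed);
    assert_eq!(3, r.value);
}
```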
}