Commit graph

97 commits

Author SHA1 Message Date
Carter Anderson
8ba9571eed
Release 0.11.0 (#9080)
I created this manually as Github didn't want to run CI for the
workflow-generated PR. I'm guessing we didn't hit this in previous
releases because we used bors.

Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-09 08:43:47 +00:00
James Liu
d33f5c759c
Add optional single-threaded feature to bevy_ecs/bevy_tasks (#6690)
# Objective
Fixes #6689.

## Solution
Add `single-threaded` as an optional non-default feature to `bevy_ecs`
and `bevy_tasks` that:

- disables the `ParallelExecutor` as the default runner
- disables the multi-threaded `TaskPool`
- internally replaces `QueryParIter::for_each` calls with `Query::for_each`

Removed the `Mutex` and `Arc` usage in the single-threaded task pool.


![image](https://user-images.githubusercontent.com/3137680/202833253-dd2d520f-75e6-4c7b-be2d-5ce1523cbd38.png)

## Future Work/TODO
Create type aliases for `Mutex` and `Arc` that change to single-threaded
equivalents where possible.
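
A hedged sketch of the kind of alias this TODO describes (illustrative only, not actual `bevy_tasks` code; the `single-threaded` feature name comes from the Solution above):

```rust
// Illustrative only: swap the shared-state types based on the feature flag.
#[cfg(not(feature = "single-threaded"))]
pub type Shared<T> = std::sync::Arc<std::sync::Mutex<T>>;

// Single-threaded builds can use cheaper, non-atomic equivalents.
#[cfg(feature = "single-threaded")]
pub type Shared<T> = std::rc::Rc<std::cell::RefCell<T>>;
```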

---

## Changelog
Added: optional default feature `multi-threaded` that enables
multithreaded parallelism in the engine. Disabling it disables all
multithreading in exchange for higher single-threaded performance. It does
nothing on WASM targets.

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2023-07-09 04:22:15 +00:00
François
fb148f7d65
remove some use of once_cell that can be replaced with new std (#8739)
# Objective

- Some methods are stabilised with Rust 1.70
https://blog.rust-lang.org/2023/06/01/Rust-1.70.0.html#oncecell-and-oncelock

## Solution

- Remove `once_cell` when possible and use std instead
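
For illustration, a minimal sketch of the std replacement (Rust 1.70 stabilized `std::cell::OnceCell` and `std::sync::OnceLock`); this is a generic example, not the actual bevy code:

```rust
use std::sync::OnceLock;

// A lazily initialized global, previously a job for `once_cell`,
// now expressible with std alone.
static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING.get_or_init(|| "hello".to_string())
}

fn main() {
    assert_eq!(greeting(), "hello");
}
```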
2023-06-01 21:55:18 +00:00
Wybe Westra
abf12f3b3b
Fixed several missing links in docs. (#8117)
Links in the api docs are nice. I noticed that there were several places
where structs / functions and other things were referenced in the docs,
but weren't linked. I added the links where possible / logical.

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François <mockersf@gmail.com>
2023-04-23 17:28:36 +00:00
Jonah Henriksson
55e9ab7c92
Cleaned up panic messages (#8219)
# Objective

Fixes #8215 and #8152. When systems panic, it causes the main thread to
panic as well, which clutters the output.

## Solution

Resolves the panic in the multi-threaded scheduler. Also adds an extra
message that tells the user the system that panicked.

Using the example from the issue, here is what the messages now look
like:

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, panicking_system)
        .run();
}

fn panicking_system() {
    panic!("oooh scary");
}
```
### Before
```
   Compiling bevy_test v0.1.0 (E:\Projects\Rust\bevy_test)
    Finished dev [unoptimized + debuginfo] target(s) in 2m 58s
     Running `target\debug\bevy_test.exe`
2023-03-30T22:19:09.234932Z  INFO bevy_diagnostic::system_information_diagnostics_plugin::internal: SystemInfo { os: "Windows 10 Pro", kernel: "19044", cpu: "AMD Ryzen 5 2600 Six-Core Processor", core_count: "6", memory: "15.9 GiB" }
thread 'Compute Task Pool (5)' panicked at 'oooh scary', src\main.rs:11:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'Compute Task Pool (5)' panicked at 'A system has panicked so the executor cannot continue.: RecvError', E:\Projects\Rust\bevy\crates\bevy_ecs\src\schedule\executor\multi_threaded.rs:194:60
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', E:\Projects\Rust\bevy\crates\bevy_tasks\src\task_pool.rs:376:49
error: process didn't exit successfully: `target\debug\bevy_test.exe` (exit code: 101)
```
### After
```
   Compiling bevy_test v0.1.0 (E:\Projects\Rust\bevy_test)
    Finished dev [unoptimized + debuginfo] target(s) in 2.39s
     Running `target\debug\bevy_test.exe`
2023-03-30T22:11:24.748513Z  INFO bevy_diagnostic::system_information_diagnostics_plugin::internal: SystemInfo { os: "Windows 10 Pro", kernel: "19044", cpu: "AMD Ryzen 5 2600 Six-Core Processor", core_count: "6", memory: "15.9 GiB" }
thread 'Compute Task Pool (5)' panicked at 'oooh scary', src\main.rs:11:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Encountered a panic in system `bevy_test::panicking_system`!
Encountered a panic in system `bevy_app::main_schedule::Main::run_main`!
error: process didn't exit successfully: `target\debug\bevy_test.exe` (exit code: 101)
```

---------

Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: François <mockersf@gmail.com>
2023-04-12 18:27:28 +00:00
Mikkel Rasmussen
e9312254d8
Non-breaking change* from UK spellings to US (#8291)
Fixes issue mentioned in PR #8285.

_Note: By mistake, this is currently dependent on #8285_
# Objective

Ensure consistency in the spelling of the documentation.

Exceptions:
`crates/bevy_mikktspace/src/generated.rs` - Has not been changed from
licence to license as it is part of a licensing agreement.

Maybe for further consistency,
https://github.com/bevyengine/bevy-website should also be given a look.

## Solution

### Changed the spelling of the current words (UK/CN/AU -> US) :
cancelled -> canceled (Breaking API changes in #8285)
behaviour -> behavior (Breaking API changes in #8285)
neighbour -> neighbor
grey -> gray
recognise -> recognize
centre -> center
metres -> meters
colour -> color

### ~~Update [`engine_style_guide.md`]~~ Moved to #8324 

---

## Changelog

Changed UK spellings in documentation to US

## Migration Guide

Non-breaking changes*

\* If merged after #8285
2023-04-08 16:22:46 +00:00
JoJoJet
3ead10a3e0
Suppress the clippy::type_complexity lint (#8313)
# Objective

The clippy lint `type_complexity` is known not to play well with bevy.
It frequently triggers when writing complex queries, and taking the
lint's advice of using a type alias almost always just obfuscates the
code with no benefit. Because of this, this lint is currently ignored in
CI, but unfortunately it still shows up when viewing bevy code in an
IDE.

As someone who's made a fair amount of pull requests to this repo, I
will say that this issue has been a consistent thorn in my side. Since
bevy code is filled with spurious, ignorable warnings, it can be very
difficult to spot the *real* warnings that must be fixed -- most of the
time I just ignore all warnings, only to later find out that one of them
was real after I'm done when CI runs.

## Solution

Suppress this lint in all bevy crates. This was previously attempted in
#7050, but the review process ended up making it more complicated than
it needs to be and landed on a subpar solution.

The discussion in https://github.com/rust-lang/rust-clippy/pull/10571
explores some better long-term solutions to this problem. Since there is
no timeline on when these solutions may land, we should resolve this
issue in the meantime by locally suppressing these lints.
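
As a rough illustration (the exact placement inside each bevy crate is an assumption), the suppression is a crate-level attribute:

```rust
// Crate-level allow; without it, clippy flags signatures like the one below.
#![allow(clippy::type_complexity)]

fn register(
    callbacks: &mut Vec<Box<dyn Fn(&str) -> Result<Vec<(String, u32)>, String> + Send + Sync>>,
) {
    callbacks.clear();
}

fn main() {
    let mut callbacks = Vec::new();
    register(&mut callbacks);
}
```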

### Unresolved issues

Currently, these lints are not suppressed in our examples, since that
would require suppressing the lint in every single source file. They are
still ignored in CI.
2023-04-06 21:27:36 +00:00
github-actions[bot]
6898351348
chore: Release (#7920)
Co-authored-by: Bevy Auto Releaser <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-06 05:13:36 +00:00
github-actions[bot]
b44af49200 Release 0.10.0 (#7919)
Preparing next release
This PR has been auto-generated
2023-03-06 03:53:02 +00:00
github-actions[bot]
8eb67932f1 Bump Version after Release (#7918)
Bump version after release
This PR has been auto-generated
2023-03-06 02:10:30 +00:00
shuo
239b070674 Fix asset_debug_server hang. There should be at most one ThreadExecutor's ticker for one thread. (#7825)

# Objective

- Fix debug_asset_server hang.

## Solution

- Reuse the thread-local executor for the MainThreadExecutor resource, so there will be only one ThreadExecutor for the main thread.
- If ThreadTickers are from the same executor, they conflict with each other, so only one of them is ticked.
2023-03-02 08:40:25 +00:00
shuo
002c9d8b7f fix whitespaces in comment (#7853)
fix double whitespaces in comments. (I know it's a dumb commit, but while reading code, double spaces hurt a little... :P)
2023-03-01 10:20:56 +00:00
Rob Parrett
b39f83640f Fix some typos (#7763)
# Objective

Stumbled on a typo and went on a typo hunt.

## Solution

Fix em
2023-02-20 22:56:57 +00:00
Mike
51201ae54c improve safety comment in scope function (#7534)
# Objective

- While working on scope recently, I ran into a missing invariant for the transmutes in scope. The references passed into Scope are active for the rest of the scope function, but Rust doesn't know this, so it allows using the owned `spawned` and `scope` after `f` returns.

## Solution

- Update the safety comment
- Shadow the owned values so they can't be used.
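
A conceptual sketch of the shadowing trick (not the actual `bevy_tasks` code; names are illustrative):

```rust
fn scope_like<F: FnOnce(&Vec<u32>)>(f: F) {
    let spawned: Vec<u32> = Vec::new();
    // Shadow the owned value so only the reference is reachable below;
    // the owned `spawned` can no longer be moved or dropped early by mistake.
    let spawned = &spawned;
    f(spawned);
}

fn main() {
    scope_like(|tasks| assert!(tasks.is_empty()));
}
```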
2023-02-13 18:20:17 +00:00
Alice Cecile
206c7ce219 Migrate engine to Schedule v3 (#7267)
Huge thanks to @maniwani, @devil-ira, @hymm, @cart, @superdump and @jakobhellermann for the help with this PR.

# Objective

- Followup #6587.
- Minimal integration for the Stageless Scheduling RFC: https://github.com/bevyengine/rfcs/pull/45

## Solution

- [x]  Remove old scheduling module
- [x] Migrate new methods to no longer use extension methods
- [x] Fix compiler errors
- [x] Fix benchmarks
- [x] Fix examples
- [x] Fix docs
- [x] Fix tests

## Changelog

### Added

- a large number of methods on `App` to work with schedules ergonomically
- the `CoreSchedule` enum
- `App::add_extract_system` via the `RenderingAppExtension` trait extension method
- the private `prepare_view_uniforms` system now has a public system set for scheduling purposes, called `ViewSet::PrepareUniforms`

### Removed

- stages, and all code that mentions stages
- states have been dramatically simplified, and no longer use a stack
- `RunCriteriaLabel`
- `AsSystemLabel` trait
- `on_hierarchy_reports_enabled` run criteria (now just uses an ad hoc resource checking run condition)
- systems in `RenderSet/Stage::Extract` no longer warn when they do not read data from the main world
- `RunCriteriaLabel`
- `transform_propagate_system_set`: this was a nonstandard pattern that didn't actually provide enough control. The systems are already `pub`: the docs have been updated to ensure that the third-party usage is clear.

### Changed

- `System::default_labels` is now `System::default_system_sets`.
- `App::add_default_labels` is now `App::add_default_sets`
- `CoreStage` and `StartupStage` enums are now `CoreSet` and `StartupSet`
- `App::add_system_set` was renamed to `App::add_systems`
- The `StartupSchedule` label is now defined as part of the `CoreSchedules` enum
-  `.label(SystemLabel)` is now referred to as `.in_set(SystemSet)`
- `SystemLabel` trait was replaced by `SystemSet`
- `SystemTypeIdLabel<T>` was replaced by `SystemSetType<T>`
- The `ReportHierarchyIssue` resource now has a public constructor (`new`), and implements `PartialEq`
- Fixed time steps now use a schedule (`CoreSchedule::FixedTimeStep`) rather than a run criteria.
- Adding rendering extraction systems now panics rather than silently failing if no subapp with the `RenderApp` label is found.
- the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. 
- `SceneSpawnerSystem` now runs under `CoreSet::Update`, rather than `CoreStage::PreUpdate.at_end()`.
- `bevy_pbr::add_clusters` is no longer an exclusive system
- the top level `bevy_ecs::schedule` module was replaced with `bevy_ecs::scheduling`
- `tick_global_task_pools_on_main_thread` is no longer run as an exclusive system. Instead, it has been replaced by `tick_global_task_pools`, which uses a `NonSend` resource to force running on the main thread.

## Migration Guide

- Calls to `.label(MyLabel)` should be replaced with `.in_set(MySet)`
- Stages have been removed. Replace these with system sets, and then add command flushes using the `apply_system_buffers` exclusive system where needed.
- The `CoreStage`, `StartupStage`, `RenderStage` and `AssetStage` enums have been replaced with `CoreSet`, `StartupSet`, `RenderSet` and `AssetSet`. The same scheduling guarantees have been preserved.
  - Systems are no longer added to `CoreSet::Update` by default. Add systems manually if this behavior is needed, although you should consider adding your game logic systems to `CoreSchedule::FixedTimestep` instead for more reliable framerate-independent behavior.
  - Similarly, startup systems are no longer part of `StartupSet::Startup` by default. In most cases, this won't matter to you.
  - For example, `add_system_to_stage(CoreStage::PostUpdate, my_system)` should be replaced with `add_system(my_system.in_set(CoreSet::PostUpdate))`
- When testing systems or otherwise running them in a headless fashion, simply construct and run a schedule using `Schedule::new()` and `World::run_schedule` rather than constructing stages (see the sketch after this list)
- Run criteria have been renamed to run conditions. These can now be combined with each other and with states.
- Looping run criteria and state stacks have been removed. Use an exclusive system that runs a schedule if you need this level of control over system control flow.
- For app-level control flow over which schedules get run when (such as for rollback networking), create your own schedule and insert it under the `CoreSchedule::Outer` label.
- Fixed timesteps are now evaluated in a schedule, rather than controlled via run criteria. The `run_fixed_timestep` system runs this schedule between `CoreSet::First` and `CoreSet::PreUpdate` by default.
- Command flush points introduced by `AssetStage` have been removed. If you were relying on these, add them back manually.
- Adding extract systems is now typically done directly on the main app. Make sure the `RenderingAppExtension` trait is in scope, then call `app.add_extract_system(my_system)`.
- the `calculate_bounds` system, with the `CalculateBounds` label, is now in `CoreSet::Update`, rather than in `CoreSet::PostUpdate` before commands are applied. You may need to order your movement systems to occur before this system in order to avoid system order ambiguities in culling behavior.
- the `RenderLabel` `AppLabel` was renamed to `RenderApp` for clarity
- `App::add_state` now takes 0 arguments: the starting state is set based on the `Default` impl.
- Instead of creating `SystemSet` containers for systems that run in stages, simply use `.on_enter::<State::Variant>()` or its `on_exit` or `on_update` siblings.
- `SystemLabel` derives should be replaced with `SystemSet`. You will also need to add the `Debug`, `PartialEq`, `Eq`, and `Hash` traits to satisfy the new trait bounds.
- `with_run_criteria` has been renamed to `run_if`. Run criteria have been renamed to run conditions for clarity, and should now simply return a bool.
- States have been dramatically simplified: there is no longer a "state stack". To queue a transition to the next state, call `NextState::set`
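
A hedged sketch of the headless pattern mentioned above, assuming `bevy_ecs` 0.10-era APIs (`Schedule::new`, `Schedule::add_systems`, `Schedule::run`); exact signatures may differ:

```rust
use bevy_ecs::prelude::*;

#[derive(Resource, Default)]
struct Counter(u32);

fn my_system(mut counter: ResMut<Counter>) {
    counter.0 += 1;
}

fn main() {
    let mut world = World::new();
    world.init_resource::<Counter>();

    // Build and run a schedule directly instead of constructing stages.
    let mut schedule = Schedule::new();
    schedule.add_systems(my_system);
    schedule.run(&mut world);

    assert_eq!(world.resource::<Counter>().0, 1);
}
```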

## TODO

- [x] remove dead methods on App and World
- [x] add `App::add_system_to_schedule` and `App::add_systems_to_schedule`
- [x] avoid adding the default system set at inappropriate times
- [x] remove any accidental cycles in the default plugins schedule
- [x] migrate benchmarks
- [x] expose explicit labels for the built-in command flush points
- [x] migrate engine code
- [x] remove all mentions of stages from the docs
- [x] verify docs for States
- [x] fix uses of exclusive systems that use .end / .at_start / .before_commands
- [x] migrate RenderStage and AssetStage
- [x] migrate examples
- [x] ensure that transform propagation is exported in a sufficiently public way (the systems are already pub)
- [x] ensure that on_enter schedules are run at least once before the main app
- [x] re-enable opt-in to execution order ambiguities
- [x] revert change to `update_bounds` to ensure it runs in `PostUpdate`
- [x] test all examples
  - [x] unbreak directional lights
  - [x] unbreak shadows (see 3d_scene, 3d_shape, lighting, transparency_3d examples)
  - [x] game menu example shows loading screen and menu simultaneously
  - [x] display settings menu is a blank screen
  - [x] `without_winit` example panics
- [x] ensure all tests pass
  - [x] SubApp doc test fails
  - [x] runs_spawn_local tasks fails
  - [x] [Fix panic_when_hierachy_cycle test hanging](https://github.com/alice-i-cecile/bevy/pull/120)

## Points of Difficulty and Controversy

**Reviewers, please give feedback on these and look closely**

1.  Default sets, from the RFC, have been removed. These added a tremendous amount of implicit complexity and result in hard to debug scheduling errors. They're going to be tackled in the form of "base sets" by @cart in a followup.
2. The outer schedule controls which schedule is run when `App::update` is called.
3. I implemented `Label` for `Box<dyn Label>` for our label types. This enables us to store schedule labels in concrete form, and then later run them. I ran into the same set of problems when working with one-shot systems. We've previously investigated this pattern in depth, and it does not appear to lead to extra indirection with nested boxes.
4. `SubApp::update` simply runs the default schedule once. This sucks, but this whole API is incomplete and this was the minimal changeset.
5. `time_system` and `tick_global_task_pools_on_main_thread` no longer use exclusive systems to attempt to force scheduling order
6. Implementation strategy for fixed timesteps
7. `AssetStage` was migrated to `AssetSet` without reintroducing command flush points. These did not appear to be used, and it's nice to remove these bottlenecks.
8. Migration of `bevy_render/lib.rs` and pipelined rendering. The logic here is unusually tricky, as we have complex scheduling requirements.

## Future Work (ideally before 0.10)

- Rename schedule_v3 module to schedule or scheduling
- Add a derive macro to states, and likely a `EnumIter` trait of some form
- Figure out what exactly to do with the "systems added should basically work by default" problem
- Improve ergonomics for working with fixed timesteps and states
- Polish FixedTime API to match Time
- Rebase and merge #7415
- Resolve all internal ambiguities (blocked on better tools, especially #7442)
- Add "base sets" to replace the removed default sets.
2023-02-06 02:04:50 +00:00
Mike
e1b0bbf5ed Stageless: add a method to scope to always run a task on the scope thread (#7415)
# Objective

- Currently, exclusive systems and applying buffers run outside of the multithreaded executor and just call the functions on the thread the schedule is running on. Stageless changes this to run these using tasks in a scope. Specifically, it uses `spawn_on_scope` to run these. For the render thread this is incorrect, as calling `spawn_on_scope` there runs tasks on the main thread. It should instead run these on the render thread and only run non-send systems on the main thread.
 
## Solution

- Add another executor to `Scope` for spawning tasks on the scope. `spawn_on_scope` now always runs the task on the thread the scope is running on. `spawn_on_external` spawns onto the external executor that is optionally passed in. If `None` is passed, `spawn_on_external` will spawn onto the scope executor (see the sketch below).
- Eventually this new machinery will be able to be removed. This will happen once a fix for removing NonSend resources from the world lands. So this is a temporary fix to support stageless.
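
A hedged usage sketch of the two spawn flavors (`bevy_tasks` API as described in this PR; result ordering is an assumption, so the results are sorted):

```rust
use bevy_tasks::TaskPool;

fn main() {
    let pool = TaskPool::new();
    let mut results = pool.scope(|scope| {
        // May run on any thread of the task pool.
        scope.spawn(async { 1 });
        // Always runs on the thread the scope itself is running on.
        scope.spawn_on_scope(async { 2 });
    });
    results.sort();
    assert_eq!(results, vec![1, 2]);
}
```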

---

## Changelog

- add a spawn_on_external method to allow spawning on the scope's thread or an external thread

## Migration Guide

> No migration guide. The main thread executor was introduced in pipelined rendering which was merged for 0.10. spawn_on_scope now behaves the same way as on 0.9.
2023-02-05 21:44:46 +00:00
Chris Ohk
3281aea5c2 Fix minor typos in code and docs (#7378)
# Objective

I found several words in code and docs are incorrect. This should be fixed.

## Solution

- Fix several minor typos

Co-authored-by: Chris Ohk <utilforever@gmail.com>
2023-01-27 12:12:53 +00:00
Mike
2027af4c54 Pipelined Rendering (#6503)
# Objective

- Implement pipelined rendering
- Fixes #5082
- Fixes #4718

## User Facing Description

Bevy now implements pipelined rendering! Pipelined rendering allows the app logic and rendering logic to run on different threads, leading to large gains in performance.

![image](https://user-images.githubusercontent.com/2180432/202049871-3c00b801-58ab-448f-93fd-471e30aba55f.png)
*tracy capture of many_foxes example*

To use pipelined rendering, you just need to add the `PipelinedRenderingPlugin`. If you're using `DefaultPlugins` then it will automatically be added for you on all platforms except wasm. Bevy does not currently support multithreading on wasm which is needed for this feature to work. If you aren't using `DefaultPlugins` you can add the plugin manually.

```rust
use bevy::prelude::*;
use bevy::render::pipelined_rendering::PipelinedRenderingPlugin;

fn main() {
    App::new()
        // whatever other plugins you need
        .add_plugin(RenderPlugin)
        // needs to be added after RenderPlugin
        .add_plugin(PipelinedRenderingPlugin)
        .run();
}
```

If for some reason pipelined rendering needs to be removed, you can also disable the plugin the normal way.

```rust
use bevy::prelude::*;
use bevy::render::pipelined_rendering::PipelinedRenderingPlugin;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins.build().disable::<PipelinedRenderingPlugin>())
        .run();
}
```

### A setup function was added to plugins

An optional plugin lifecycle function was added to the `Plugin` trait. This function is called after all plugins have been built, but before the app runner is called. This allows some final setup to be done. In the case of pipelined rendering, the function removes the sub-app from the main app and sends it to the render thread.

```rust
struct MyPlugin;
impl Plugin for MyPlugin {
    fn build(&self, app: &mut App) {
        
    }
    
    // optional function
    fn setup(&self, app: &mut App) {
        // do some final setup before runner is called
    }
}
```

### A Stage for Frame Pacing

In the `RenderExtractApp` there is a stage labelled `BeforeIoAfterRenderStart` that systems can be added to.  The specific use case for this stage is for a frame pacing system that can delay the start of main app processing in render bound apps to reduce input latency i.e. "frame pacing". This is not currently built into bevy, but exists as `bevy`

```text
|-------------------------------------------------------------------|
|         | BeforeIoAfterRenderStart | winit events | main schedule |
| extract |---------------------------------------------------------|
|         | extract commands | rendering schedule                   |
|-------------------------------------------------------------------|
```

### Small API additions

* `Schedule::remove_stage`
* `App::insert_sub_app`
* `App::remove_sub_app` 
* `TaskPool::scope_with_executor`

## Problems and Solutions

### Moving render app to another thread

Most of the hard bits for this were done with the render redo. This PR just sends the render app back and forth through channels which seems to work ok. I originally experimented with using a scope to run the render task. It was cuter, but that approach didn't allow render to start before i/o processing. So I switched to using channels. There is much complexity in the coordination that needs to be done, but it's worth it. By moving rendering during i/o processing the frame times should be much more consistent in render bound apps. See https://github.com/bevyengine/bevy/issues/4691.

### Unsoundness with Sending World with NonSend resources

Dropping !Send things on threads other than the thread they were spawned on is considered unsound. The render world doesn't have any nonsend resources. So if we tell the users to "pretty please don't spawn nonsend resource on the render world", we can avoid this problem.

More seriously there is this https://github.com/bevyengine/bevy/pull/6534 pr, which patches the unsoundness by aborting the app if a nonsend resource is dropped on the wrong thread. ~~That PR should probably be merged before this one.~~ For a longer term solution we have this discussion going https://github.com/bevyengine/bevy/discussions/6552.

### NonSend Systems in render world

The render world doesn't have any !Send resources, but it does have a non-send system. While Window is Send, winit does have some APIs that can only be accessed on the main thread. `prepare_windows` in the render schedule thus needs to be scheduled on the main thread. Currently we run non-send systems by running them on the thread the TaskPool::scope runs on. When we move render to another thread this no longer works.

To fix this, a new `scope_with_executor` method was added that takes an optional `ThreadExecutor` that can only be ticked on the thread it was initialized on. The render world then holds a `MainThreadExecutor` resource, which can be passed to the scope in the parallel executor and used to spawn its non-send systems on the main thread.

### Scopes executors between render and main should not share tasks

Since the render world and the app world share the `ComputeTaskPool`. Because `scope` has executors for the ComputeTaskPool a system from the main world could run on the render thread or a render system could run on the main thread. This can cause performance problems because it can delay a stage from finishing. See https://github.com/bevyengine/bevy/pull/6503#issuecomment-1309791442 for more details.

To avoid this problem, `TaskPool::scope` has been changed to not tick the ComputeTaskPool when it's used by the parallel executor. In the future when we move closer to the 1 thread to 1 logical core model we may want to overprovide threads, because the render and main app threads don't do much when executing the schedule.

## Performance

My machine is Windows 11, AMD Ryzen 5600x, RX 6600

### Examples

#### This PR with pipelining vs Main

> Note that these were run on an older version of main and the performance profile has probably changed due to optimizations

Seeing a perf gain from 29% on many lights to 7% on many sprites.

| tracy frame time | percent (mean) | percent (median) | percent (sigma) | Diff (mean) | Diff (median) | Diff (sigma) | Main (mean) | Main (median) | Main (sigma) | PR (mean) | PR (median) | PR (sigma) |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| many foxes | 27.01% | 27.34% | -47.09% | 1.58 | 1.55 | -1.78 | 5.85 | 5.67 | 3.78 | 4.27 | 4.12 | 5.56 |
| many lights | 29.35% | 29.94% | -10.84% | 3.02 | 3.03 | -0.57 | 10.29 | 10.12 | 5.26 | 7.27 | 7.09 | 5.83 |
| many animated sprites | 13.97% | 15.69% | 14.20% | 3.79 | 4.17 | 1.41 | 27.12 | 26.57 | 9.93 | 23.33 | 22.4 | 8.52 |
| 3d scene | 25.79% | 26.78% | 7.46% | 0.49 | 0.49 | 0.15 | 1.9 | 1.83 | 2.01 | 1.41 | 1.34 | 1.86 |
| many cubes | 11.97% | 11.28% | 14.51% | 1.93 | 1.78 | 1.31 | 16.13 | 15.78 | 9.03 | 14.2 | 14 | 7.72 |
| many sprites | 7.14% | 9.42% | -85.42% | 1.72 | 2.23 | -6.15 | 24.09 | 23.68 | 7.2 | 22.37 | 21.45 | 13.35 |

#### This PR with pipelining disabled vs Main

Mostly regressions here. I don't think this should be a problem, as users that are disabling pipelined rendering are probably running single threaded and not using the parallel executor. The regression is probably mostly due to the switch to use `async_executor::run` instead of `try_tick` and also having one less thread to run systems on. I'll do a writeup on why switching to `run` causes regressions, so we can try to eventually fix it. Using `try_tick` causes issues when pipelined rendering is enabled, as seen [here](https://github.com/bevyengine/bevy/pull/6503#issuecomment-1380803518)

| tracy frame time | percent (mean) | percent (median) | percent (sigma) | Diff (mean) | Diff (median) | Diff (sigma) | Main (mean) | Main (median) | Main (sigma) | PR no pipelining (mean) | PR no pipelining (median) | PR no pipelining (sigma) |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| many foxes | -3.72% | -4.42% | -1.07% | -0.21 | -0.24 | -0.04 | 5.64 | 5.43 | 3.74 | 5.85 | 5.67 | 3.78 |
| many lights | 0.29% | -0.30% | 4.75% | 0.03 | -0.03 | 0.25 | 10.29 | 10.12 | 5.26 | 10.26 | 10.15 | 5.01 |
| many animated sprites | 0.22% | 1.81% | -2.72% | 0.06 | 0.48 | -0.27 | 27.12 | 26.57 | 9.93 | 27.06 | 26.09 | 10.2 |
| 3d scene | -15.79% | -14.75% | -31.34% | -0.3 | -0.27 | -0.63 | 1.9 | 1.83 | 2.01 | 2.2 | 2.1 | 2.64 |
| many cubes | -2.85% | -3.30% | 0.00% | -0.46 | -0.52 | 0 | 16.13 | 15.78 | 9.03 | 16.59 | 16.3 | 9.03 |
| many sprites | 2.49% | 2.41% | 0.69% | 0.6 | 0.57 | 0.05 | 24.09 | 23.68 | 7.2 | 23.49 | 23.11 | 7.15 |

### Benchmarks

Mostly the same, except empty_systems has gotten a touch slower. The maybe-pipelining+1 column uses a compute task pool with one extra thread over the default. This is because pipelining loses one thread compared to main for executing systems, since the main thread no longer runs normal systems.

<details>
<summary>Click Me</summary>

```text
group                                                             main                                         maybe-pipelining+1
-----                                                             -------------------------                ------------------
busy_systems/01x_entities_03_systems                              1.07     30.7±1.32µs        ? ?/sec      1.00     28.6±1.35µs        ? ?/sec
busy_systems/01x_entities_06_systems                              1.10     52.1±1.10µs        ? ?/sec      1.00     47.2±1.08µs        ? ?/sec
busy_systems/01x_entities_09_systems                              1.00     74.6±1.36µs        ? ?/sec      1.00     75.0±1.93µs        ? ?/sec
busy_systems/01x_entities_12_systems                              1.03    100.6±6.68µs        ? ?/sec      1.00     98.0±1.46µs        ? ?/sec
busy_systems/01x_entities_15_systems                              1.11    128.5±3.53µs        ? ?/sec      1.00    115.5±1.02µs        ? ?/sec
busy_systems/02x_entities_03_systems                              1.16     50.4±2.56µs        ? ?/sec      1.00     43.5±3.00µs        ? ?/sec
busy_systems/02x_entities_06_systems                              1.00     87.1±1.27µs        ? ?/sec      1.05     91.5±7.15µs        ? ?/sec
busy_systems/02x_entities_09_systems                              1.04    139.9±6.37µs        ? ?/sec      1.00    134.0±1.06µs        ? ?/sec
busy_systems/02x_entities_12_systems                              1.05    179.2±3.47µs        ? ?/sec      1.00    170.1±3.17µs        ? ?/sec
busy_systems/02x_entities_15_systems                              1.01    219.6±3.75µs        ? ?/sec      1.00    218.1±2.55µs        ? ?/sec
busy_systems/03x_entities_03_systems                              1.10     70.6±2.33µs        ? ?/sec      1.00     64.3±0.69µs        ? ?/sec
busy_systems/03x_entities_06_systems                              1.02    130.2±3.11µs        ? ?/sec      1.00    128.0±1.34µs        ? ?/sec
busy_systems/03x_entities_09_systems                              1.00   195.0±10.11µs        ? ?/sec      1.00    194.8±1.41µs        ? ?/sec
busy_systems/03x_entities_12_systems                              1.01    261.7±4.05µs        ? ?/sec      1.00    259.8±4.11µs        ? ?/sec
busy_systems/03x_entities_15_systems                              1.00    318.0±3.04µs        ? ?/sec      1.06   338.3±20.25µs        ? ?/sec
busy_systems/04x_entities_03_systems                              1.00     82.9±0.63µs        ? ?/sec      1.02     84.3±0.63µs        ? ?/sec
busy_systems/04x_entities_06_systems                              1.01    181.7±3.65µs        ? ?/sec      1.00    179.8±1.76µs        ? ?/sec
busy_systems/04x_entities_09_systems                              1.04    265.0±4.68µs        ? ?/sec      1.00    255.3±1.98µs        ? ?/sec
busy_systems/04x_entities_12_systems                              1.00    335.9±3.00µs        ? ?/sec      1.05   352.6±15.84µs        ? ?/sec
busy_systems/04x_entities_15_systems                              1.00   418.6±10.26µs        ? ?/sec      1.08   450.2±39.58µs        ? ?/sec
busy_systems/05x_entities_03_systems                              1.07    114.3±0.95µs        ? ?/sec      1.00    106.9±1.52µs        ? ?/sec
busy_systems/05x_entities_06_systems                              1.08    229.8±2.90µs        ? ?/sec      1.00    212.3±4.18µs        ? ?/sec
busy_systems/05x_entities_09_systems                              1.03    329.3±1.99µs        ? ?/sec      1.00    319.2±2.43µs        ? ?/sec
busy_systems/05x_entities_12_systems                              1.06    454.7±6.77µs        ? ?/sec      1.00    430.1±3.58µs        ? ?/sec
busy_systems/05x_entities_15_systems                              1.03    554.6±6.15µs        ? ?/sec      1.00   538.4±23.87µs        ? ?/sec
contrived/01x_entities_03_systems                                 1.00     14.0±0.15µs        ? ?/sec      1.08     15.1±0.21µs        ? ?/sec
contrived/01x_entities_06_systems                                 1.04     28.5±0.37µs        ? ?/sec      1.00     27.4±0.44µs        ? ?/sec
contrived/01x_entities_09_systems                                 1.00     41.5±4.38µs        ? ?/sec      1.02     42.2±2.24µs        ? ?/sec
contrived/01x_entities_12_systems                                 1.06     55.9±1.49µs        ? ?/sec      1.00     52.6±1.36µs        ? ?/sec
contrived/01x_entities_15_systems                                 1.02     68.0±2.00µs        ? ?/sec      1.00     66.5±0.78µs        ? ?/sec
contrived/02x_entities_03_systems                                 1.03     25.2±0.38µs        ? ?/sec      1.00     24.6±0.52µs        ? ?/sec
contrived/02x_entities_06_systems                                 1.00     46.3±0.49µs        ? ?/sec      1.04     48.1±4.13µs        ? ?/sec
contrived/02x_entities_09_systems                                 1.02     70.4±0.99µs        ? ?/sec      1.00     68.8±1.04µs        ? ?/sec
contrived/02x_entities_12_systems                                 1.06     96.8±1.49µs        ? ?/sec      1.00     91.5±0.93µs        ? ?/sec
contrived/02x_entities_15_systems                                 1.02    116.2±0.95µs        ? ?/sec      1.00    114.2±1.42µs        ? ?/sec
contrived/03x_entities_03_systems                                 1.00     33.2±0.38µs        ? ?/sec      1.01     33.6±0.45µs        ? ?/sec
contrived/03x_entities_06_systems                                 1.00     62.4±0.73µs        ? ?/sec      1.01     63.3±1.05µs        ? ?/sec
contrived/03x_entities_09_systems                                 1.02     96.4±0.85µs        ? ?/sec      1.00     94.8±3.02µs        ? ?/sec
contrived/03x_entities_12_systems                                 1.01    126.3±4.67µs        ? ?/sec      1.00    125.6±2.27µs        ? ?/sec
contrived/03x_entities_15_systems                                 1.03    160.2±9.37µs        ? ?/sec      1.00    156.0±1.53µs        ? ?/sec
contrived/04x_entities_03_systems                                 1.02     41.4±3.39µs        ? ?/sec      1.00     40.5±0.52µs        ? ?/sec
contrived/04x_entities_06_systems                                 1.00     78.9±1.61µs        ? ?/sec      1.02     80.3±1.06µs        ? ?/sec
contrived/04x_entities_09_systems                                 1.02    121.8±3.97µs        ? ?/sec      1.00    119.2±1.46µs        ? ?/sec
contrived/04x_entities_12_systems                                 1.00    157.8±1.48µs        ? ?/sec      1.01    160.1±1.72µs        ? ?/sec
contrived/04x_entities_15_systems                                 1.00    197.9±1.47µs        ? ?/sec      1.08   214.2±34.61µs        ? ?/sec
contrived/05x_entities_03_systems                                 1.00     49.1±0.33µs        ? ?/sec      1.01     49.7±0.75µs        ? ?/sec
contrived/05x_entities_06_systems                                 1.00     95.0±0.93µs        ? ?/sec      1.00     94.6±0.94µs        ? ?/sec
contrived/05x_entities_09_systems                                 1.01    143.2±1.68µs        ? ?/sec      1.00    142.2±2.00µs        ? ?/sec
contrived/05x_entities_12_systems                                 1.00    191.8±2.03µs        ? ?/sec      1.01    192.7±7.88µs        ? ?/sec
contrived/05x_entities_15_systems                                 1.02    239.7±3.71µs        ? ?/sec      1.00    235.8±4.11µs        ? ?/sec
empty_systems/000_systems                                         1.01     47.8±0.67ns        ? ?/sec      1.00     47.5±2.02ns        ? ?/sec
empty_systems/001_systems                                         1.00  1743.2±126.14ns        ? ?/sec     1.01  1761.1±70.10ns        ? ?/sec
empty_systems/002_systems                                         1.01      2.2±0.04µs        ? ?/sec      1.00      2.2±0.02µs        ? ?/sec
empty_systems/003_systems                                         1.02      2.7±0.09µs        ? ?/sec      1.00      2.7±0.16µs        ? ?/sec
empty_systems/004_systems                                         1.00      3.1±0.11µs        ? ?/sec      1.00      3.1±0.24µs        ? ?/sec
empty_systems/005_systems                                         1.00      3.5±0.05µs        ? ?/sec      1.11      3.9±0.70µs        ? ?/sec
empty_systems/010_systems                                         1.00      5.5±0.12µs        ? ?/sec      1.03      5.7±0.17µs        ? ?/sec
empty_systems/015_systems                                         1.00      7.9±0.19µs        ? ?/sec      1.06      8.4±0.16µs        ? ?/sec
empty_systems/020_systems                                         1.00     10.4±1.25µs        ? ?/sec      1.02     10.6±0.18µs        ? ?/sec
empty_systems/025_systems                                         1.00     12.4±0.39µs        ? ?/sec      1.14     14.1±1.07µs        ? ?/sec
empty_systems/030_systems                                         1.00     15.1±0.39µs        ? ?/sec      1.05     15.8±0.62µs        ? ?/sec
empty_systems/035_systems                                         1.00     16.9±0.47µs        ? ?/sec      1.07     18.0±0.37µs        ? ?/sec
empty_systems/040_systems                                         1.00     19.3±0.41µs        ? ?/sec      1.05     20.3±0.39µs        ? ?/sec
empty_systems/045_systems                                         1.00     22.4±1.67µs        ? ?/sec      1.02     22.9±0.51µs        ? ?/sec
empty_systems/050_systems                                         1.00     24.4±1.67µs        ? ?/sec      1.01     24.7±0.40µs        ? ?/sec
empty_systems/055_systems                                         1.05     28.6±5.27µs        ? ?/sec      1.00     27.2±0.70µs        ? ?/sec
empty_systems/060_systems                                         1.02     29.9±1.64µs        ? ?/sec      1.00     29.3±0.66µs        ? ?/sec
empty_systems/065_systems                                         1.02     32.7±3.15µs        ? ?/sec      1.00     32.1±0.98µs        ? ?/sec
empty_systems/070_systems                                         1.00     33.0±1.42µs        ? ?/sec      1.03     34.1±1.44µs        ? ?/sec
empty_systems/075_systems                                         1.00     34.8±0.89µs        ? ?/sec      1.04     36.2±0.70µs        ? ?/sec
empty_systems/080_systems                                         1.00     37.0±1.82µs        ? ?/sec      1.05     38.7±1.37µs        ? ?/sec
empty_systems/085_systems                                         1.00     38.7±0.76µs        ? ?/sec      1.05     40.8±0.83µs        ? ?/sec
empty_systems/090_systems                                         1.00     41.5±1.09µs        ? ?/sec      1.04     43.2±0.82µs        ? ?/sec
empty_systems/095_systems                                         1.00     43.6±1.10µs        ? ?/sec      1.04     45.2±0.99µs        ? ?/sec
empty_systems/100_systems                                         1.00     46.7±2.27µs        ? ?/sec      1.03     48.1±1.25µs        ? ?/sec
```
</details>

## Migration Guide

### App `runner` and SubApp `extract` functions are now required to be Send 

This was changed to enable pipelined rendering. If this breaks your use case please report it as these new bounds might be able to be relaxed.

## ToDo

* [x] redo benchmarking
* [x] reinvestigate the perf of the try_tick -> run change for task pool scope
2023-01-19 23:45:46 +00:00
Mike
a13b6f8a05 Thread executor for running tasks on specific threads. (#7087)
# Objective

- Spawn tasks from other threads onto an async executor, but limit those tasks to run on a specific thread.
- This is a continuation of trying to break up some of the changes in pipelined rendering.
- Eventually this will be used to allow `NonSend` systems to run on the main thread in pipelined rendering (#6503) and also to solve #6552.
- For this specific PR, this allows us to store a thread executor in a thread local rather than recreating a scope executor for every scope, which should save a little work.

## Solution

- We create an Executor that does a runtime check for which thread it's on before creating a !Send ticker (sketched below). The ticker is the only way for the executor to make progress.
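
A conceptual, std-only sketch of that runtime check (not the actual `bevy_tasks::ThreadExecutor` implementation):

```rust
use std::marker::PhantomData;
use std::thread::{self, ThreadId};

struct ThreadBoundExecutor {
    owner: ThreadId,
}

/// A ticker that is !Send (via the raw-pointer marker), so it cannot leave
/// the thread that created it.
struct Ticker<'a> {
    _executor: &'a ThreadBoundExecutor,
    _not_send: PhantomData<*const ()>,
}

impl ThreadBoundExecutor {
    fn new() -> Self {
        Self { owner: thread::current().id() }
    }

    /// Hands out a ticker only on the thread that created the executor.
    fn ticker(&self) -> Option<Ticker<'_>> {
        (thread::current().id() == self.owner).then(|| Ticker {
            _executor: self,
            _not_send: PhantomData,
        })
    }
}

fn main() {
    let executor = ThreadBoundExecutor::new();
    assert!(executor.ticker().is_some());
}
```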

---

## Changelog

- create a ThreadExecutor that can only be ticked on one thread.
2023-01-10 22:32:42 +00:00
Rob Parrett
3dd8b42f72 Fix various typos (#7096)
I stumbled across a typo in some docs. Fixed some more while I was in there.
2023-01-06 00:43:30 +00:00
James Liu
53a5bbe2d5 Add thread create/destroy callbacks to TaskPool (#6561)
# Objective
Fix #1991. Allow users to have a bit more control over the creation and finalization of the threads in `TaskPool`.

## Solution
Add new methods to `TaskPoolBuilder` that expose callbacks that are called to initialize and finalize each thread in the `TaskPool`.

Unlike the proposed solution in #1991, the callback is argument-less. If an identifier is needed, `std::thread::current` should provide that information easily.

Added a unit test to ensure that they're being called correctly.
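
A hedged usage sketch; the builder method names (`on_thread_spawn` / `on_thread_destroy`) are taken from the PR description and should be treated as assumptions:

```rust
use bevy_tasks::TaskPoolBuilder;

fn main() {
    let pool = TaskPoolBuilder::new()
        .thread_name("worker".to_string())
        // Assumed method names for the new callbacks described above.
        .on_thread_spawn(|| println!("spawned {:?}", std::thread::current().id()))
        .on_thread_destroy(|| println!("destroying {:?}", std::thread::current().id()))
        .build();

    pool.scope(|scope| {
        scope.spawn(async { println!("hello from the pool") });
    });
}
```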
2022-12-20 16:17:02 +00:00
ira
b7d6ee8c68 Update concurrent-queue to 2.0 (#6538)
Co-authored-by: devil-ira <justthecooldude@gmail.com>
2022-12-13 21:00:43 +00:00
Mike
8eb8ad5c4a await tasks to cancel (#6696)
# Objective

- Fixes https://github.com/bevyengine/bevy/issues/6603

## Solution

- `Task`s cancel when dropped, but they wait until they return `Pending` before they are actually canceled. That means that if a task panics, it's possible for that error to get propagated to the scope and for the scope to be dropped while scoped tasks on other threads are still running. This is a big problem, since scoped tasks can hold lifetimed values that are dropped as the scope is dropped, leading to UB.

---

## Changelog

- changed `Scope` to use `FallibleTask` and await the cancellation of all remaining tasks when it's dropped.
2022-11-23 00:41:19 +00:00
James Liu
210979f631 Fix panicking on another scope (#6524)
# Objective
Fix #6453. 

## Solution
Use the solution mentioned in the issue by catching the unwind and dropping the error. Wrap the `executor.try_tick` calls with `std::panic::catch_unwind`.

Ideally this would be moved outside of the hot loop, but the mut ref to the `spawned` future is not `UnwindSafe`.

This PR only addresses the bug, we can address the perf issues (should there be any) later.
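
A minimal illustration of the catch-and-discard pattern, with a plain closure standing in for the `executor.try_tick` call that the actual fix wraps:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

fn main() {
    let mut ticks = 0;
    let result = catch_unwind(AssertUnwindSafe(|| {
        ticks += 1;
        panic!("a task panicked while being ticked");
    }));

    // The panic is caught and discarded; the surrounding loop keeps running.
    assert!(result.is_err());
    assert_eq!(ticks, 1);
}
```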
2022-11-21 12:59:08 +00:00
github-actions[bot]
920543c824 Release 0.9.0 (#6568)
Preparing next release
This PR has been auto-generated
2022-11-12 20:01:29 +00:00
dataphract
1914a3f288 fix: explicitly specify required version of async-task (#6509)
# Objective

Attempting to build `bevy_tasks` produces the following error:

```
error[E0599]: no method named `is_finished` found for struct `async_executor::Task` in the current scope
  --> /[...]]/bevy/crates/bevy_tasks/src/task.rs:51:16
   |
51 |         self.0.is_finished()
   |                ^^^^^^^^^^^ method not found in `async_executor::Task<T>`

```

It looks like this was introduced along with `Task::is_finished`, which delegates to `async_task::Task::is_finished`. However, the latter was only introduced in `async-task` 4.2.0; `bevy_tasks` does not explicitly depend on `async-task` but on `async-executor` ^1.3.0, which in turn depends on `async-task` ^4.0.0.

## Solution

Add an explicit dependency on `async-task` ^4.2.0.
2022-11-07 21:43:07 +00:00
James Liu
54a1e51623 TaskPool Panic Handling (#6443)
# Objective
Right now, the `TaskPool` implementation allows panics to permanently kill worker threads. This is currently non-recoverable without using a `std::panic::catch_unwind` in every scheduled task, which is poor ergonomics and an even poorer developer experience. This is exacerbated by #2250, as these threads are global and cannot be replaced after initialization.

Removes the need for temporary fixes like #4998. Fixes #4996. Fixes #6081. Fixes #5285. Fixes #5054. Supersedes #2307.

## Solution

The current solution is to wrap `Executor::run` in `TaskPool` with a `catch_unwind`, discarding the potential panic. This was taken straight from [smol](404c7bcc0a/src/spawn.rs (L44))'s current implementation. ~~However, this is not entirely ideal as:~~

 - ~~the panic is not signaled to the awaiting task. We would need to change `Task<T>` to use `async_task::FallibleTask` internally, and even then it doesn't signal *why* it panicked, just that it did.~~ (See below).
 - ~~no error is logged of any kind~~ (See below)
 - ~~it's unclear if it drops other tasks in the executor~~ (it does not)
 - ~~This allows the ECS parallel executor to keep chugging even though a system's task has been dropped. This inevitably leads to deadlock in the executor.~~ Assuming we don't catch the unwind in ParallelExecutor, this will naturally kill the main thread.

### Alternatives
A final solution likely will incorporate elements of any or all of the following.

#### ~~Log and Ignore~~
~~Log the panic, drop the task, keep chugging. This only addresses the discoverability of the panic. The process will continue to run, probably deadlocking the executor. tokio's detached tasks operate in this fashion.~~

Panics already do this by default, even when caught by `catch_unwind`.

#### ~~`catch_unwind` in `ParallelExecutor`~~
~~Add another layer catching system-level panics into the `ParallelExecutor`. How the executor continues when a core dependency of many systems fails to run is up for debate.~~

`async_task::Task`  bubbles up panics already, this will transitively push panics all the way to the main thread.

#### ~~Emulate/Copy `tokio::JoinHandle` with `Task<T>`~~
~~`tokio::JoinHandle<T>` bubbles up the panic from the underlying task when awaited. This can be transitively applied across other APIs that also use `Task<T>` like `Query::par_for_each` and `TaskPool::scope`, bubbling up the panic until it's either caught or it reaches the main thread.~~

`async_task::Task`  bubbles up panics already, this will transitively push panics all the way to the main thread.

#### Abort on Panic
The nuclear option. Log the error, abort the entire process on any thread in the task pool panicking. This definitely avoids any additional infrastructure for passing the panic around, and might actually lead to more efficient code as any unwinding is optimized out. However, it gives the developer zero options for dealing with the issue, is a seemingly poor choice for debuggability, and prevents graceful shutdown of the process. Potentially an option for handling very low-level task management (a la #4740). Roughly takes the shape of:

```rust
struct AbortOnPanic;

impl Drop for AbortOnPanic {
    fn drop(&mut self) {
        // If this guard is dropped (e.g. during an unwind from a panic), abort.
        std::process::abort();
    }
}

let guard = AbortOnPanic;
// Run task
// Task finished without panicking: defuse the guard so we don't abort.
std::mem::forget(guard);
```

---

## Changelog

Changed: `bevy_tasks::TaskPool`'s threads will no longer terminate permanently when a task scheduled onto them panics.
Changed: `bevy_tasks::Task` and `bevy_tasks::Scope` will propagate panics in the spawned tasks/scopes to the parent thread.
2022-11-02 23:40:08 +00:00
Alice Cecile
334e09892b Revert "Show prelude re-exports in docs (#6448)" (#6449)
This reverts commit 53d387f340.

# Objective

Reverts #6448. This didn't have the intended effect: we're now getting bevy::prelude shown in the docs again.

Co-authored-by: Alejandro Pascual <alejandro.pascual.pozo@gmail.com>
2022-11-02 20:40:45 +00:00
Alejandro Pascual
53d387f340 Show prelude re-exports in docs (#6448)
# Objective

- Right now re-exports are completely hidden in prelude docs.
- Fixes #6433

## Solution

- We could show the re-exports without inlining their documentation.
2022-11-02 19:35:06 +00:00
targrub
b672465047 Fix doctest warnings (#6447)
# Objective

- Fixes doctest warnings from upcoming Rust release.

`cargo doc --workspace --all-features --no-deps --document-private-items` using `beta-x86_64-pc-windows-msvc (default)`, `rustc 1.66.0-beta.1 (e080cc5a6 2022-11-01)`, was giving warnings on a few comments.

## Solution

- Quoted the Rust code parts.
2022-11-02 16:47:40 +00:00
BeastLe9enD
ed3ecda91d Add is_finished to Task<T> (#6444)
# Objective

In some scenarios it can be useful to check whether a task has finished without polling it. I added a function called `is_finished` to check this.

## Solution

Since `async_task` supports it out of the box, it is just a simple wrapper function.
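
A hedged usage sketch (`futures_lite` is used for `block_on` since `bevy_tasks` already depends on it; the busy-wait is purely illustrative):

```rust
use bevy_tasks::TaskPool;

fn main() {
    let pool = TaskPool::new();
    let task = pool.spawn(async { 2 + 2 });

    // Check completion without polling the future itself.
    while !task.is_finished() {
        std::thread::yield_now();
    }

    assert_eq!(futures_lite::future::block_on(task), 4);
}
```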

---
2022-11-02 12:27:22 +00:00
Jakob Hellermann
e71c4d2802 fix nightly clippy warnings (#6395)
# Objective

- fix new clippy lints before they get stable and break CI

## Solution

- run `clippy --fix` to auto-fix machine-applicable lints
- silence `clippy::should_implement_trait` for `fn HandleId::default<T: Asset>`

## Changes
- always prefer `format!("{inline}")` over `format!("{}", not_inline)`
- prefer `Box::default` (or `Box::<T>::default` if necessary) over `Box::new(T::default())`
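
A tiny illustration of the two preferences above (generic example, not a diff from this PR):

```rust
fn main() {
    let inline = 42;
    // prefer `format!("{inline}")` over `format!("{}", inline)`
    let message = format!("value = {inline}");
    assert_eq!(message, "value = 42");

    // prefer `Box::<T>::default()` over `Box::new(T::default())`
    let boxed: Box<Vec<u8>> = Box::default();
    assert!(boxed.is_empty());
}
```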
2022-10-28 21:03:01 +00:00
targrub
c18b1a839b Prepare for upcoming rustlang by fixing upcoming clippy warnings (#6376)
# Objective

- Proactively change code to comply with warnings generated by the beta version of cargo clippy.

## Solution

- Code changed as recommended by `rustup update`, `rustup default beta`, `cargo run -p ci -- clippy`.
- Tested using `beta` and `stable`. No clippy warnings in either after the changes were made.

---

## Changelog

- Warnings fixed were: `clippy::explicit-auto-deref` (present in 11 files), `clippy::needless-borrow` (present in 2 files), and `clippy::only-used-in-recursion` (only 1 file).
2022-10-26 19:15:15 +00:00
Mike
0f3f628c48 tick local executor (#6121)
# Objective

- #4466 broke the running of local tasks.
- Fixes https://github.com/bevyengine/bevy/issues/6120

## Solution

- Add a system for ticking local executors on the main thread into bevy_core, where the task pools are initialized.
- Add ticking of local executors into thread executors

## Changelog

- tick all thread local executors in task pool.

## Notes

- ~~Not 100% sure about this PR. Ticking the local executor for the main thread in scope feels a little kludgy as it requires users of bevy_tasks to be calling scope periodically for those tasks to make progress.~~ took this out in favor of a system that ticks the local executors.
2022-10-24 13:46:40 +00:00
Mike
48e9dc1964 fix failing doc test and clear up docs (#6314)
# Objective

Fixes https://github.com/bevyengine/bevy/issues/6306

## Solution
Change the failing assert and expand the example to explain when ordering is deterministic or not.

Co-authored-by: Mike Hsu <mike.hsu@gmail.com>
2022-10-20 20:23:57 +00:00
Mike
d22d310ad5 Nested spawns on scope (#4466)
# Objective

- Add ability to create nested spawns. This is needed for stageless. The current executor spawns tasks for each system early and runs the system by communicating through a channel. In stageless we want to spawn the task late, so that archetypes can be updated right before the task is run. The executor is run on a separate task, so this enables the scope to be passed to the spawned executor.
- Fixes #4301

## Solution

- Instantiate a single-threaded executor on the scope and use that instead of the LocalExecutor. This allows the scope to be Send, but still able to spawn tasks onto the main thread the scope is run on. This works because, while systems can access non-send data, the systems themselves are Send. Because of this change we lose the ability to spawn non-send tasks on the scope, but I don't think this is being used anywhere. Users would still be able to use spawn_local on TaskPools.
- Steals the lifetime tricks that `std::thread::scope` uses to allow nested spawns, but disallows the scope being passed to tasks or threads not associated with the scope.
- Change the storage for the tasks to a `ConcurrentQueue`. This is to allow a &Scope to be passed for spawning instead of a &mut Scope. `ConcurrentQueue` was chosen because it was already in our dependency tree because `async_executor` depends on it.
- Removed the optimizations for 0 and 1 spawned tasks. They did improve those cases, but made the cases of more than 1 task slower.
---

## Changelog

Add ability to nest spawns

```rust
fn main() {
    let pool = TaskPool::new();
    pool.scope(|scope| {
        scope.spawn(async move {
            // calling scope.spawn from a spawned task was not possible before
            scope.spawn(async move {
                // do something
            });
        });
    })
}
```

## Migration Guide

If you were using explicit lifetimes and passing `Scope`, you'll need to specify two lifetimes now.

```rust
fn scoped_function<'scope>(scope: &mut Scope<'scope, ()>) {}
// should become
fn scoped_function<'scope>(scope: &Scope<'_, 'scope, ()>) {}
```

`scope.spawn_local` changed to `scope.spawn_on_scope`. This should cover cases where you needed to run tasks on the local thread, but it does not cover spawning non-send futures.

## TODO
* [x] think real hard about all the lifetimes
* [x] add doc about what 'env and 'scope mean.
* [x] manually check that the single threaded task pool still works
* [x] Get updated perf numbers
* [x] check and make sure all the transmutes are necessary
* [x] move commented out test into a compile fail test
* [x] look through the tests for scope on std and see if I should add any more tests

Co-authored-by: Michael Hsu <myhsu@benjaminelectric.com>
Co-authored-by: Carter Anderson <mcanders1@gmail.com>
2022-09-28 01:59:10 +00:00
ira
527dce9a69 Mark Task as #[must_use] (#6068)
The `async_executor::Task` that it wraps is also `#[must_use]` with the same message.

Co-authored-by: devil-ira <justthecooldude@gmail.com>
2022-09-22 17:21:16 +00:00
James Liu
f2ad11104d Swap out num_cpus for std::thread::available_parallelism (#4970)
# Objective
As of Rust 1.59, `std::thread::available_parallelism` has been stabilized. As of Rust 1.61, the API matches `num_cpus::get` by properly handling Linux's cgroups and other sandboxing mechanisms.

As bevy does not have an established MSRV, we can replace `num_cpus` in `bevy_tasks` and reduce our dependency tree by one dep.

## Solution
Replace `num_cpus` with `std::thread::available_parallelism`. Wrap it to provide a fallback in case it errors out, and have it operate in the same manner as `num_cpus` did.

This however removes `physical_core_count` from the API, though we are currently not using it in any way in first-party crates.
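
A hedged sketch of such a wrapper (not the exact bevy_tasks code):

```rust
use std::num::NonZeroUsize;

/// Mirrors the old `num_cpus::get` behavior: fall back to 1 if the
/// available parallelism cannot be determined.
fn available_parallelism() -> usize {
    std::thread::available_parallelism()
        .map(NonZeroUsize::get)
        .unwrap_or(1)
}

fn main() {
    println!("logical cores: {}", available_parallelism());
}
```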

---

## Changelog
Changed: `bevy_tasks::logical_core_count` -> `bevy_tasks::available_parallelism`.
Removed: `bevy_tasks::physical_core_count`.

## Migration Guide
`bevy_tasks::logical_core_count` and `bevy_tasks::physical_core_count` have been removed. `logical_core_count` has been replaced with `bevy_tasks::available_parallelism`, which works identically. If `bevy_tasks::physical_core_count` is required, the `num_cpus` crate can be used directly, as these two were just aliases for `num_cpus` APIs.
2022-09-19 15:46:03 +00:00
targrub
d0e294c86b Query filter types must be ReadOnlyWorldQuery (#6008)
# Objective

Fixes Issue #6005.

## Solution

Replaced `WorldQuery` with `ReadOnlyWorldQuery` on the `F` generic in Query filters and `QueryState` to restrict its trait bound.

## Migration Guide

Query filter (`F`) generics are now bound by `ReadOnlyWorldQuery`, rather than `WorldQuery`. If for some reason you were requesting `Query<&A, &mut B>`, please use `Query<&A, With<B>>` instead.
2022-09-18 23:52:01 +00:00
Squirrel
43f9271d13 unused dep references? (#5954)
Got a hunch (from undepend) that these are not needed.
2022-09-18 15:49:49 +00:00
github-actions[bot]
444150025d Bump Version after Release (#5576)
Bump version after release
This PR has been auto-generated
2022-08-05 02:03:05 +00:00
github-actions[bot]
856588ed7c Release 0.8.0 (#5490)
Preparing next release
This PR has been auto-generated
2022-07-30 14:07:30 +00:00
CGMossa
93a131661d Very minor doc formatting changes (#5287)
# Objective

- Added a bunch of backticks to things that should have them, like equations and abstract variable names.
- Changed all lowercase x, y, and z to capital X, Y, Z.

This might be more annoying than helpful; Feel free to refuse this PR.
2022-07-12 13:06:16 +00:00
Ralf Jung
a32eac825a Miri can set thread names now (#5108)
# Objective

https://github.com/rust-lang/miri/issues/1717 has been fixed so we can set thread names in Miri now.

## Solution

We set thread names in Miri.
2022-06-26 21:28:00 +00:00
James Liu
012ae07dc8 Add global init and get accessors for all newtyped TaskPools (#2250)
Right now, a direct reference to the target TaskPool is required to launch tasks on the pools, despite the three newtyped pools (AsyncComputeTaskPool, ComputeTaskPool, and IoTaskPool) effectively acting as global instances. The need to pass a TaskPool reference adds notable friction to spawning subtasks within existing tasks. Possible use cases for this may include chaining tasks within the same pool like spawning separate send/receive I/O tasks after waiting on a network connection to be established, or allowing cross-pool dependent tasks like starting dependent multi-frame computations following a long I/O load. 

Other task execution runtimes provide static access to spawning tasks (i.e. `tokio::spawn`), which is notably easier to use than the reference passing required by `bevy_tasks` right now.

This PR does the following:

 * Adds `*TaskPool::init`, which initializes a `OnceCell` with a provided TaskPool, failing if the pool has already been initialized (see the sketch after this list).
 * Adds `*TaskPool::get`, which fetches the initialized global pool of the respective type or panics. This generally should not be an issue in normal Bevy use, as the pools are initialized before they are accessed.
 * Updated default task pool initialization to either pull the global handles and save them as resources, or, if they are already initialized, pull a cloned global handle as the resource.
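
A hedged usage sketch of the new accessors (the pools are normally initialized by Bevy's default plugins rather than by hand like this):

```rust
use bevy_tasks::{ComputeTaskPool, TaskPool};

fn main() {
    // Initialize the global pool once...
    ComputeTaskPool::init(TaskPool::default);

    // ...then fetch it from anywhere, without passing references around.
    let pool = ComputeTaskPool::get();
    pool.scope(|scope| {
        scope.spawn(async { println!("task on the global compute pool") });
    });
}
```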

This should make it notably easier to build more complex task hierarchies for dependent tasks. It should also make writing bevy-adjacent, but not strictly bevy-only plugin crates easier, as the global pools ensure it's all running on the same threads.

One alternative considered is keeping a thread-local reference to the pool for all threads in each pool to enable the same `tokio::spawn` interface. This would spawn tasks on the same pool that a task is currently running in. However, this potentially leads to footgun situations where long-running blocking tasks run on `ComputeTaskPool`.
2022-06-09 02:43:24 +00:00
MiniaczQ
4e547ded11 Remove unused CountdownEvent (#4290)
# Objective

Fixes:
#4287 

## Solution

I removed it.
2022-04-26 21:20:12 +00:00
François
1cd17e903f document the single threaded wasm task pool (#4571)
# Objective

- The single threaded task pool is not documented
- This doesn't warn in CI as it's feature gated for wasm, but I'm tired of seeing the warnings when building in wasm

## Solution

- Document it
2022-04-24 22:57:04 +00:00
Yutao Yuan
8d67832dfa Bump Bevy to 0.8.0-dev (#4505)
# Objective

We should bump our version to 0.8.0-dev after releasing 0.7.0, according to our release checklist.

## Solution

Do it.
2022-04-17 23:04:52 +00:00
Carter Anderson
83c6ffb73c release 0.7.0 (#4487) 2022-04-15 18:05:37 +00:00
Boxy
28ba87e6c8 CI runs cargo miri test -p bevy_ecs (#4310)
# Objective

Fixes #1529
Run bevy_ecs in miri

## Solution

- Don't set thread names when running in miri rust-lang/miri/issues/1717
- Update `event-listener` to `2.5.2` as previous versions have UB that is detected by miri: [event-listener commit](1fa31c553e)
- Ignore memory leaks when running in miri as they are impossible to track down rust-lang/miri/issues/1481
- Make `table_add_remove_many` test less "many" because miri is really quite slow :)
- Make CI run `RUSTFLAGS="-Zrandomize-layout" MIRIFLAGS="-Zmiri-ignore-leaks -Zmiri-tag-raw-pointers -Zmiri-disable-isolation" cargo +nightly miri test -p bevy_ecs`
2022-03-25 00:26:07 +00:00