bevy/crates/bevy_tasks
Latest commit: 951c9bb1a2 by Ame
Add [lints] table, fix adding #![allow(clippy::type_complexity)] everywhere (#10011)
# Objective

- Fix having to add `#![allow(clippy::type_complexity)]` everywhere, as in #9796

## Solution

- Use the new `[lints]` table that will land in Rust 1.74
  (https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#lints)
- Have crates and examples inherit the lint configuration from the workspace:
```toml
[lints]
workspace = true
```

## Changelog

- Bump the Rust version to 1.74
- Enable the lints table for the workspace:
```toml
[workspace.lints.clippy]
type_complexity = "allow"
```
- Allow type complexity for all crates and examples
```toml
[lints]
workspace = true
```

---------

Co-authored-by: Martín Maita <47983254+mnmaita@users.noreply.github.com>
2023-11-18 20:58:48 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| examples | small and mostly pointless refactoring (#2934) | 2022-02-13 22:33:55 +00:00 |
| src | Add [lints] table, fix adding #![allow(clippy::type_complexity)] everywhere (#10011) | 2023-11-18 20:58:48 +00:00 |
| Cargo.toml | Add [lints] table, fix adding #![allow(clippy::type_complexity)] everywhere (#10011) | 2023-11-18 20:58:48 +00:00 |
| README.md | add and fix shields in Readmes (#9993) | 2023-10-15 00:52:31 +00:00 |

Bevy Tasks


A refreshingly simple task executor for bevy. :)

This is a simple threadpool with minimal dependencies. The main use case is a scoped fork-join, i.e. spawning tasks from a single thread and having that thread await the completion of those tasks. This is intended specifically for bevy as a lighter alternative to rayon for this specific use case. There are also utilities for generating the tasks from a slice of data. This library is intended for games and makes no attempt to ensure fairness or ordering of spawned tasks.
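A minimal sketch of the scoped fork-join pattern described above, using `TaskPool::scope`; the helper `sum_chunk` and the chunking scheme are illustrative, not part of the crate:

```rust
use bevy_tasks::TaskPool;

/// Hypothetical CPU-bound work performed on one chunk of the input.
fn sum_chunk(chunk: &[u64]) -> u64 {
    chunk.iter().sum()
}

fn main() {
    let pool = TaskPool::new();
    let data: Vec<u64> = (1..=10_000).collect();

    // Fork: spawn one task per chunk. Join: `scope` blocks this thread until
    // every spawned task has finished and returns their outputs as a Vec.
    let partial_sums: Vec<u64> = pool.scope(|scope| {
        for chunk in data.chunks(1_000) {
            scope.spawn(async move { sum_chunk(chunk) });
        }
    });

    let total: u64 = partial_sums.into_iter().sum();
    assert_eq!(total, data.iter().sum::<u64>());
}
```

The slice utilities mentioned above wrap essentially this pattern as ready-made helpers, so hand-rolling the chunking as done here is only needed for custom workloads.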

It is based on async-executor, a lightweight executor that allows the end user to manage their own threads. async-executor is based on async-task, a core piece of async-std.

Usage

To optimize task execution in multi-threaded environments, bevy provides three different thread pools via which tasks of different kinds can be spawned. (The same API is used in single-threaded environments, even if execution is limited to a single thread. This currently applies to WASM targets.) The determining factor for which pool a piece of work belongs in is its latency requirements:

  • For CPU-intensive work (tasks that generally spin until completion) we have a standard [ComputeTaskPool] and an [AsyncComputeTaskPool]. Work that does not need to be completed to present the next frame should go to the [AsyncComputeTaskPool].

  • For IO-intensive work (tasks that spend very little time in a "woken" state) we have an [IoTaskPool] whose tasks are expected to complete very quickly. Generally speaking, they should just await receiving data from somewhere (e.g. disk) and signal other systems when the data is ready for consumption (likely via channels).
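A minimal sketch of spawning onto the three pools, assuming they have already been initialized (a running Bevy app normally does this at startup); `bake_lightmaps` and `load_save_file` are made-up placeholder workloads:

```rust
use bevy_tasks::{AsyncComputeTaskPool, ComputeTaskPool, IoTaskPool, Task};

// Hypothetical workloads, used only for illustration.
async fn bake_lightmaps() -> Vec<u8> {
    vec![0; 1024]
}
async fn load_save_file() -> String {
    String::from("save data")
}

/// Meant to be called from inside a running app, where the global pools
/// have already been initialized.
fn kick_off_background_work() -> Task<Vec<u8>> {
    // Frame-critical CPU work: `scope` blocks until every spawned task is done.
    let squares: Vec<u32> = ComputeTaskPool::get().scope(|scope| {
        for i in 0..4u32 {
            scope.spawn(async move { i * i });
        }
    });
    assert_eq!(squares.len(), 4);

    // CPU-heavy work whose result is *not* needed for the next frame:
    // keep the returned `Task` and poll it from a system in a later frame.
    let lightmap_task = AsyncComputeTaskPool::get().spawn(bake_lightmaps());

    // IO-bound work: mostly awaiting, then handing the data off to whoever
    // consumes it (likely via a channel). `detach` lets the task run to
    // completion without keeping the `Task` handle around.
    IoTaskPool::get()
        .spawn(async move {
            let _contents = load_save_file().await;
            // ...send the contents over a channel here...
        })
        .detach();

    lightmap_task
}
```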