mirror of
https://github.com/bevyengine/bevy
synced 2024-11-25 06:00:20 +00:00
Multiple Asset Sources (#9885)
This adds support for **Multiple Asset Sources**. You can now register a named `AssetSource`, which you can load assets from like you normally would:

```rust
let shader: Handle<Shader> = asset_server.load("custom_source://path/to/shader.wgsl");
```

Notice that `AssetPath` now supports `some_source://` syntax. This can now be accessed through the `asset_path.source()` accessor.

Asset source names _are not required_. If one is not specified, the default asset source will be used:

```rust
let shader: Handle<Shader> = asset_server.load("path/to/shader.wgsl");
```

The behavior of the default asset source has not changed. For example, the `assets` folder is still the default.

As referenced in #9714.

## Why?

**Multiple Asset Sources** enables a number of often-asked-for scenarios:

* **Loading some assets from other locations on disk**: you could create a `config` asset source that reads from the OS-default config folder (not implemented in this PR)
* **Loading some assets from a remote server**: you could register a new `remote` asset source that reads some assets from a remote HTTP server (not implemented in this PR)
* **Improved "Binary Embedded" Assets**: we can use this system for "embedded-in-binary" assets, which allows us to replace the old `load_internal_asset!` approach, which couldn't support asset processing, didn't support hot-reloading _well_, and didn't make embedded assets accessible to the `AssetServer` (implemented in this PR)

## Adding New Asset Sources

An `AssetSource` is "just" a collection of `AssetReader`, `AssetWriter`, and `AssetWatcher` entries. You can configure new asset sources like this:

```rust
app.register_asset_source(
    "other",
    AssetSource::build().with_reader(|| Box::new(FileAssetReader::new("other"))),
)
```

Note that `AssetSource` construction _must_ be repeatable, which is why a closure is accepted.
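To make the `some_source://` syntax above concrete, here is a hand-rolled sketch of splitting a path into its optional source and the remainder. This is illustrative only — `split_source` is a hypothetical helper, not Bevy's actual `AssetPath` parser:

```rust
/// Splits "embedded://bevy_pbr/render/mesh.wgsl" into
/// (Some("embedded"), "bevy_pbr/render/mesh.wgsl"). A path without a
/// "source://" prefix yields (None, path), i.e. the default source.
fn split_source(path: &str) -> (Option<&str>, &str) {
    match path.find("://") {
        Some(index) => (Some(&path[..index]), &path[index + 3..]),
        None => (None, path),
    }
}

fn main() {
    assert_eq!(
        split_source("embedded://bevy_pbr/render/mesh.wgsl"),
        (Some("embedded"), "bevy_pbr/render/mesh.wgsl")
    );
    // No source prefix: falls back to the default asset source.
    assert_eq!(split_source("path/to/shader.wgsl"), (None, "path/to/shader.wgsl"));
    println!("ok");
}
```

A single `find` over the string mirrors the "single pass" goal of the reworked `AssetPath` parsing described later in this PR.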
`AssetSourceBuilder` supports `with_reader`, `with_writer`, `with_watcher`, `with_processed_reader`, `with_processed_writer`, and `with_processed_watcher`.

Note that the "asset source" system replaces the old "asset providers" system.

## Processing Multiple Sources

The `AssetProcessor` now supports multiple asset sources! Processed assets can refer to assets in other sources and everything "just works".

Each `AssetSource` defines an unprocessed and processed `AssetReader` / `AssetWriter`. Currently this is all or nothing for a given `AssetSource`: a given source is either processed or it is not. Later we might want to add support for "lazy asset processing", where an `AssetSource` (such as a remote server) can be configured to only process assets that are directly referenced by local assets (in order to save local disk space and avoid doing extra work).

## A new `AssetSource`: `embedded`

One of the big features motivating **Multiple Asset Sources** was improving our "embedded-in-binary" asset loading. To prove out the **Multiple Asset Sources** implementation, I chose to build a new `embedded` `AssetSource`, which replaces the old `load_internal_asset!` system.

The old `load_internal_asset!` approach had a number of issues:

* The `AssetServer` was not aware of (or capable of loading) internal assets.
* Because internal assets weren't visible to the `AssetServer`, they could not be processed (or used by assets that are processed). This would prevent things like "preprocessing shaders that depend on built-in Bevy shaders", which is something we desperately need to start doing.
* Each "internal asset" needed a UUID to be defined in-code to reference it. This was very manual and toilsome.
The new `embedded` `AssetSource` enables the following pattern:

```rust
// Called in `crates/bevy_pbr/src/render/mesh.rs`
embedded_asset!(app, "mesh.wgsl");

// later in the app
let shader: Handle<Shader> = asset_server.load("embedded://bevy_pbr/render/mesh.wgsl");
```

Notice that this always treats the crate name as the "root path", and it trims out the `src` path for brevity. This is generally predictable, but if you need to debug you can use the new `embedded_path!` macro to get a `PathBuf` that matches the one used by `embedded_asset!`.

You can also reference embedded assets in arbitrary assets, such as WGSL shaders:

```rust
#import "embedded://bevy_pbr/render/mesh.wgsl"
```

This also makes `embedded` assets go through the "normal" asset lifecycle. They are only loaded when they are actually used!

We are also discussing implicitly converting asset paths to/from shader modules, so in the future (not in this PR) you might be able to load it like this:

```rust
#import bevy_pbr::render::mesh::Vertex
```

Compare that to the old system!

```rust
pub const MESH_SHADER_HANDLE: Handle<Shader> = Handle::weak_from_u128(3252377289100772450);

load_internal_asset!(app, MESH_SHADER_HANDLE, "mesh.wgsl", Shader::from_wgsl);

// The mesh asset is _only_ accessible via MESH_SHADER_HANDLE and _cannot_ be loaded via the AssetServer.
```

## Hot Reloading `embedded`

You can enable `embedded` hot reloading by enabling the `embedded_watcher` cargo feature:

```
cargo run --features=embedded_watcher
```

## Improved Hot Reloading Workflow

First: the `filesystem_watcher` cargo feature has been renamed to `file_watcher` for brevity (and to match the `FileAssetReader` naming convention).

More importantly, hot asset reloading is no longer configured in-code by default. If you enable any asset watcher feature (such as `file_watcher` or `embedded_watcher`), asset watching will be automatically enabled. This removes the need to _also_ enable hot reloading in your app code.
That means you can replace this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::default().watch_for_changes()))
```

with this:

```rust
app.add_plugins(DefaultPlugins)
```

If you want to hot reload assets in your app during development, just run your app like this:

```
cargo run --features=file_watcher
```

This means you can use the same code for development and deployment! To deploy an app, just don't include the watcher feature:

```
cargo build --release
```

My intent is to move to this approach for pretty much all dev workflows. In a future PR I would like to replace `AssetMode::ProcessedDev` with a `runtime-processor` cargo feature. We could then group all common "dev" cargo features under a single `dev` feature:

```sh
# this would enable file_watcher, embedded_watcher, runtime-processor, and more
cargo run --features=dev
```

## AssetMode

`AssetPlugin::Unprocessed`, `AssetPlugin::Processed`, and `AssetPlugin::ProcessedDev` have been replaced with an `AssetMode` field on `AssetPlugin`.

```rust
// before
app.add_plugins(DefaultPlugins.set(AssetPlugin::Processed { /* fields here */ }))

// after
app.add_plugins(DefaultPlugins.set(AssetPlugin { mode: AssetMode::Processed, ..default() }))
```

This aligns `AssetPlugin` with our other struct-like plugins. The old "source" and "destination" `AssetProvider` fields in the enum variants have been replaced by the "asset source" system. You no longer need to configure the `AssetPlugin` to "point" to custom asset providers.

## AssetServerMode

To improve the implementation of **Multiple Asset Sources**, `AssetServer` was made aware of whether it is using "processed" or "unprocessed" assets. You can check that like this:

```rust
if asset_server.mode() == AssetServerMode::Processed {
    /* do something */
}
```

Note that this refactor should also prepare the way for building "one to many processed output files", as it makes the server aware of whether it is loading from processed or unprocessed sources.
Meaning we can store and read processed and unprocessed assets differently!

## AssetPath can now refer to folders

The "file only" restriction has been removed from `AssetPath`. The `AssetServer::load_folder` API now accepts an `AssetPath` instead of a `Path`, meaning you can load folders from other asset sources!

## Improved AssetPath Parsing

`AssetPath` parsing was reworked to support sources, improve error messages, and to enable parsing with a single pass over the string. `AssetPath::new` was replaced by `AssetPath::parse` and `AssetPath::try_parse`.

## AssetWatcher broken out from AssetReader

`AssetReader` is no longer responsible for constructing `AssetWatcher`. This has been moved to `AssetSourceBuilder`.

## Duplicate Event Debouncing

Asset V2 already debounced duplicate filesystem events, but those were _input_ events. Multiple input event types can produce the same _output_ `AssetSourceEvent`. Now that we have `embedded_watcher`, which does expensive file IO on events, it made sense to debounce output events too, so I added that! This will also benefit the `AssetProcessor` by preventing integrity checks for duplicate events (and helps keep the noise down in trace logs).

## Next Steps

* **Port Built-in Shaders**: Currently the primary (and essentially only) user of `load_internal_asset!` in Bevy's source code is "built-in shaders". I chose not to do that in this PR for a few reasons:
  1. We need to add the ability to pass shader defs in to shaders via meta files. Some shaders (such as `MESH_VIEW_TYPES`) need to pass shader def values in that are defined in code.
  2. We need to revisit the current shader module naming system. I think we _probably_ want to imply modules from source structure (at least by default), ideally in a way that can losslessly convert asset paths to/from shader modules (to enable the asset system to resolve modules using the asset server).
  3. I want to keep this change set minimal / get this merged first.
* **Deprecate `load_internal_asset!`**: we can't do that until we do (1) and (2).
* **Relative Asset Paths**: This PR significantly increases the need for relative asset paths (which was already pretty high). Currently when loading dependencies, the path is assumed to be absolute, which means if in an `AssetLoader` you call `context.load("some/path/image.png")` it will assume that is in the "default" asset source, _even if the current asset is in a different asset source_. This will cause breakage for `AssetLoader`s that are not designed to add the current source to whatever paths are being used. `AssetLoader`s should generally not need to be aware of the name of their current asset source, or need to think about the "current asset source" generally. We should build APIs that support relative asset paths and then encourage using relative paths as much as possible (both via API design and docs). Relative paths are also important because they will allow developers to move folders around (even across providers) without reprocessing, provided there is no path breakage.
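To make the relative-path idea concrete, a resolver along these lines could keep the parent asset's source when an `AssetLoader` loads a dependency. This is hypothetical — `resolve_relative` is not an existing Bevy API, just a sketch of the behavior described above:

```rust
/// Resolves `relative` against a parent asset at `parent_source`/`parent_path`,
/// keeping the parent's source so "types.wgsl" loaded from an
/// "embedded://" asset stays in the embedded source.
fn resolve_relative(parent_source: &str, parent_path: &str, relative: &str) -> String {
    // The parent's directory is everything up to the last '/'.
    let parent_dir = match parent_path.rfind('/') {
        Some(i) => &parent_path[..i],
        None => "",
    };
    if parent_dir.is_empty() {
        format!("{parent_source}://{relative}")
    } else {
        format!("{parent_source}://{parent_dir}/{relative}")
    }
}

fn main() {
    let resolved = resolve_relative("embedded", "bevy_pbr/render/mesh.wgsl", "types.wgsl");
    assert_eq!(resolved, "embedded://bevy_pbr/render/types.wgsl");
    println!("{resolved}");
}
```

The point of the design is that the loader never names its own source; the resolution step carries it forward automatically.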
This commit is contained in:

parent: 9290674060
commit: 35073cf7aa

33 changed files with 2127 additions and 1047 deletions
```diff
@@ -245,7 +245,10 @@ shader_format_spirv = ["bevy_internal/shader_format_spirv"]
 webgl2 = ["bevy_internal/webgl"]
 
 # Enables watching the filesystem for Bevy Asset hot-reloading
-filesystem_watcher = ["bevy_internal/filesystem_watcher"]
+file_watcher = ["bevy_internal/file_watcher"]
+
+# Enables watching in memory asset providers for Bevy Asset hot-reloading
+embedded_watcher = ["bevy_internal/embedded_watcher"]
 
 [dependencies]
 bevy_dylib = { path = "crates/bevy_dylib", version = "0.12.0-dev", default-features = false, optional = true }
@@ -1065,6 +1068,7 @@ wasm = true
 name = "hot_asset_reloading"
 path = "examples/asset/hot_asset_reloading.rs"
 doc-scrape-examples = true
+required-features = ["file_watcher"]
 
 [package.metadata.example.hot_asset_reloading]
 name = "Hot Reloading of Assets"
@@ -1076,7 +1080,7 @@ wasm = true
 name = "asset_processing"
 path = "examples/asset/processing/processing.rs"
 doc-scrape-examples = true
-required-features = ["filesystem_watcher"]
+required-features = ["file_watcher"]
 
 [package.metadata.example.asset_processing]
 name = "Asset Processing"
```
```diff
@@ -11,8 +11,10 @@ keywords = ["bevy"]
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 
 [features]
-filesystem_watcher = ["notify-debouncer-full"]
+file_watcher = ["notify-debouncer-full", "watch"]
+embedded_watcher = ["file_watcher"]
 multi-threaded = ["bevy_tasks/multi-threaded"]
+watch = []
 
 [dependencies]
 bevy_app = { path = "../bevy_app", version = "0.12.0-dev" }
```
```diff
@@ -71,11 +71,4 @@ impl AssetReader for AndroidAssetReader {
         error!("Reading directories is not supported with the AndroidAssetReader");
         Box::pin(async move { Ok(false) })
     }
-
-    fn watch_for_changes(
-        &self,
-        _event_sender: crossbeam_channel::Sender<super::AssetSourceEvent>,
-    ) -> Option<Box<dyn AssetWatcher>> {
-        None
-    }
 }
```
New file: `crates/bevy_asset/src/io/embedded/embedded_watcher.rs` (88 lines)

```rust
use crate::io::{
    file::{get_asset_path, get_base_path, new_asset_event_debouncer, FilesystemEventHandler},
    memory::Dir,
    AssetSourceEvent, AssetWatcher,
};
use bevy_log::warn;
use bevy_utils::{Duration, HashMap};
use notify_debouncer_full::{notify::RecommendedWatcher, Debouncer, FileIdMap};
use parking_lot::RwLock;
use std::{
    fs::File,
    io::{BufReader, Read},
    path::{Path, PathBuf},
    sync::Arc,
};

/// A watcher for assets stored in the `embedded` asset source. Embedded assets are assets whose
/// bytes have been embedded into the Rust binary using the [`embedded_asset`](crate::embedded_asset) macro.
/// This watcher will watch for changes to the "source files", read the contents of changed files from the file system
/// and overwrite the initial static bytes of the file embedded in the binary with the new dynamically loaded bytes.
pub struct EmbeddedWatcher {
    _watcher: Debouncer<RecommendedWatcher, FileIdMap>,
}

impl EmbeddedWatcher {
    pub fn new(
        dir: Dir,
        root_paths: Arc<RwLock<HashMap<PathBuf, PathBuf>>>,
        sender: crossbeam_channel::Sender<AssetSourceEvent>,
        debounce_wait_time: Duration,
    ) -> Self {
        let root = get_base_path();
        let handler = EmbeddedEventHandler {
            dir,
            root: root.clone(),
            sender,
            root_paths,
            last_event: None,
        };
        let watcher = new_asset_event_debouncer(root, debounce_wait_time, handler).unwrap();
        Self { _watcher: watcher }
    }
}

impl AssetWatcher for EmbeddedWatcher {}

/// A [`FilesystemEventHandler`] that uses [`EmbeddedAssetRegistry`](crate::io::embedded::EmbeddedAssetRegistry) to hot-reload
/// binary-embedded Rust source files. This will read the contents of changed files from the file system and overwrite
/// the initial static bytes from the file embedded in the binary.
pub(crate) struct EmbeddedEventHandler {
    sender: crossbeam_channel::Sender<AssetSourceEvent>,
    root_paths: Arc<RwLock<HashMap<PathBuf, PathBuf>>>,
    root: PathBuf,
    dir: Dir,
    last_event: Option<AssetSourceEvent>,
}

impl FilesystemEventHandler for EmbeddedEventHandler {
    fn begin(&mut self) {
        self.last_event = None;
    }

    fn get_path(&self, absolute_path: &Path) -> Option<(PathBuf, bool)> {
        let (local_path, is_meta) = get_asset_path(&self.root, absolute_path);
        let final_path = self.root_paths.read().get(&local_path)?.clone();
        if is_meta {
            warn!("Meta file asset hot-reloading is not supported yet: {final_path:?}");
        }
        Some((final_path, false))
    }

    fn handle(&mut self, absolute_paths: &[PathBuf], event: AssetSourceEvent) {
        if self.last_event.as_ref() != Some(&event) {
            if let AssetSourceEvent::ModifiedAsset(path) = &event {
                if let Ok(file) = File::open(&absolute_paths[0]) {
                    let mut reader = BufReader::new(file);
                    let mut buffer = Vec::new();

                    // Read file into vector.
                    if reader.read_to_end(&mut buffer).is_ok() {
                        self.dir.insert_asset(path, buffer);
                    }
                }
            }
            self.last_event = Some(event.clone());
            self.sender.send(event).unwrap();
        }
    }
}
```
New file: `crates/bevy_asset/src/io/embedded/mod.rs` (252 lines)

```rust
#[cfg(feature = "embedded_watcher")]
mod embedded_watcher;

#[cfg(feature = "embedded_watcher")]
pub use embedded_watcher::*;

use crate::io::{
    memory::{Dir, MemoryAssetReader, Value},
    AssetSource, AssetSourceBuilders,
};
use bevy_ecs::system::Resource;
use std::path::{Path, PathBuf};

pub const EMBEDDED: &str = "embedded";

/// A [`Resource`] that manages "rust source files" in a virtual in memory [`Dir`], which is intended
/// to be shared with a [`MemoryAssetReader`].
/// Generally this should not be interacted with directly. The [`embedded_asset`] macro will populate this.
///
/// [`embedded_asset`]: crate::embedded_asset
#[derive(Resource, Default)]
pub struct EmbeddedAssetRegistry {
    dir: Dir,
    #[cfg(feature = "embedded_watcher")]
    root_paths: std::sync::Arc<
        parking_lot::RwLock<bevy_utils::HashMap<std::path::PathBuf, std::path::PathBuf>>,
    >,
}

impl EmbeddedAssetRegistry {
    /// Inserts a new asset. `full_path` is the full path (as [`file`] would return for that file, if it was capable of
    /// running in a non-rust file). `asset_path` is the path that will be used to identify the asset in the `embedded`
    /// [`AssetSource`]. `value` is the bytes that will be returned for the asset. This can be _either_ a `&'static [u8]`
    /// or a [`Vec<u8>`].
    #[allow(unused)]
    pub fn insert_asset(&self, full_path: PathBuf, asset_path: &Path, value: impl Into<Value>) {
        #[cfg(feature = "embedded_watcher")]
        self.root_paths
            .write()
            .insert(full_path.to_owned(), asset_path.to_owned());
        self.dir.insert_asset(asset_path, value);
    }

    /// Inserts new asset metadata. `full_path` is the full path (as [`file`] would return for that file, if it was capable of
    /// running in a non-rust file). `asset_path` is the path that will be used to identify the asset in the `embedded`
    /// [`AssetSource`]. `value` is the bytes that will be returned for the asset. This can be _either_ a `&'static [u8]`
    /// or a [`Vec<u8>`].
    #[allow(unused)]
    pub fn insert_meta(&self, full_path: &Path, asset_path: &Path, value: impl Into<Value>) {
        #[cfg(feature = "embedded_watcher")]
        self.root_paths
            .write()
            .insert(full_path.to_owned(), asset_path.to_owned());
        self.dir.insert_meta(asset_path, value);
    }

    /// Registers an `embedded` [`AssetSource`] that uses this [`EmbeddedAssetRegistry`].
    // NOTE: unused_mut because embedded_watcher feature is the only mutable consumer of `let mut source`
    #[allow(unused_mut)]
    pub fn register_source(&self, sources: &mut AssetSourceBuilders) {
        let dir = self.dir.clone();
        let processed_dir = self.dir.clone();
        let mut source = AssetSource::build()
            .with_reader(move || Box::new(MemoryAssetReader { root: dir.clone() }))
            .with_processed_reader(move || {
                Box::new(MemoryAssetReader {
                    root: processed_dir.clone(),
                })
            });

        #[cfg(feature = "embedded_watcher")]
        {
            let root_paths = self.root_paths.clone();
            let dir = self.dir.clone();
            let processed_root_paths = self.root_paths.clone();
            let processed_dir = self.dir.clone();
            source = source
                .with_watcher(move |sender| {
                    Some(Box::new(EmbeddedWatcher::new(
                        dir.clone(),
                        root_paths.clone(),
                        sender,
                        std::time::Duration::from_millis(300),
                    )))
                })
                .with_processed_watcher(move |sender| {
                    Some(Box::new(EmbeddedWatcher::new(
                        processed_dir.clone(),
                        processed_root_paths.clone(),
                        sender,
                        std::time::Duration::from_millis(300),
                    )))
                });
        }
        sources.insert(EMBEDDED, source);
    }
}

/// Returns the [`Path`] for a given `embedded` asset.
/// This is used internally by [`embedded_asset`] and can be used to get a [`Path`]
/// that matches the [`AssetPath`](crate::AssetPath) used by that asset.
///
/// [`embedded_asset`]: crate::embedded_asset
#[macro_export]
macro_rules! embedded_path {
    ($path_str: expr) => {{
        embedded_path!("/src/", $path_str)
    }};

    ($source_path: expr, $path_str: expr) => {{
        let crate_name = module_path!().split(':').next().unwrap();
        let after_src = file!().split($source_path).nth(1).unwrap();
        let file_path = std::path::Path::new(after_src)
            .parent()
            .unwrap()
            .join($path_str);
        std::path::Path::new(crate_name).join(file_path)
    }};
}

/// Creates a new `embedded` asset by embedding the bytes of the given path into the current binary
/// and registering those bytes with the `embedded` [`AssetSource`].
///
/// This accepts the current [`App`](bevy_app::App) as the first parameter and a path `&str` (relative to the current file) as the second.
///
/// By default this will generate an [`AssetPath`] using the following rules:
///
/// 1. Search for the first `$crate_name/src/` in the path and trim to the path past that point.
/// 2. Re-add the current `$crate_name` to the front of the path.
///
/// For example, consider the following file structure in the theoretical `bevy_rock` crate, which provides a Bevy [`Plugin`](bevy_app::Plugin)
/// that renders fancy rocks for scenes.
///
/// * `bevy_rock`
///     * `src`
///         * `render`
///             * `rock.wgsl`
///             * `mod.rs`
///         * `lib.rs`
///     * `Cargo.toml`
///
/// `rock.wgsl` is a WGSL shader asset that the `bevy_rock` plugin author wants to bundle with their crate. They invoke the following
/// in `bevy_rock/src/render/mod.rs`:
///
/// `embedded_asset!(app, "rock.wgsl")`
///
/// `rock.wgsl` can now be loaded by the [`AssetServer`](crate::AssetServer) with the following path:
///
/// ```no_run
/// # use bevy_asset::{Asset, AssetServer};
/// # use bevy_reflect::TypePath;
/// # let asset_server: AssetServer = panic!();
/// #[derive(Asset, TypePath)]
/// # struct Shader;
/// let shader = asset_server.load::<Shader>("embedded://bevy_rock/render/rock.wgsl");
/// ```
///
/// Some things to note in the path:
/// 1. The non-default `embedded://` [`AssetSource`]
/// 2. `src` is trimmed from the path
///
/// The default behavior also works for cargo workspaces. Pretend the `bevy_rock` crate now exists in a larger workspace in
/// `$SOME_WORKSPACE/crates/bevy_rock`. The asset path would remain the same, because [`embedded_asset`] searches for the
/// _first instance_ of `bevy_rock/src` in the path.
///
/// For most "standard crate structures" the default works just fine. But for some niche cases (such as cargo examples),
/// the `src` path will not be present. You can override this behavior by adding it as the second argument to [`embedded_asset`]:
///
/// `embedded_asset!(app, "/examples/rock_stuff/", "rock.wgsl")`
///
/// When there are three arguments, the second argument will replace the default `/src/` value. Note that these two are
/// equivalent:
///
/// `embedded_asset!(app, "rock.wgsl")`
/// `embedded_asset!(app, "/src/", "rock.wgsl")`
///
/// This macro uses the [`include_bytes`] macro internally and _will not_ reallocate the bytes.
/// Generally the [`AssetPath`] generated will be predictable, but if your asset isn't
/// available for some reason, you can use the [`embedded_path`] macro to debug.
///
/// Hot-reloading `embedded` assets is supported. Just enable the `embedded_watcher` cargo feature.
///
/// [`AssetPath`]: crate::AssetPath
/// [`embedded_asset`]: crate::embedded_asset
/// [`embedded_path`]: crate::embedded_path
#[macro_export]
macro_rules! embedded_asset {
    ($app: ident, $path: expr) => {{
        embedded_asset!($app, "/src/", $path)
    }};

    ($app: ident, $source_path: expr, $path: expr) => {{
        let mut embedded = $app
            .world
            .resource_mut::<$crate::io::embedded::EmbeddedAssetRegistry>();
        let path = $crate::embedded_path!($source_path, $path);
        #[cfg(feature = "embedded_watcher")]
        let full_path = std::path::Path::new(file!()).parent().unwrap().join($path);
        #[cfg(not(feature = "embedded_watcher"))]
        let full_path = std::path::PathBuf::new();
        embedded.insert_asset(full_path, &path, include_bytes!($path));
    }};
}

/// Loads an "internal" asset by embedding the string stored in the given `path_str` and associates it with the given handle.
#[macro_export]
macro_rules! load_internal_asset {
    ($app: ident, $handle: expr, $path_str: expr, $loader: expr) => {{
        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
        assets.insert($handle, ($loader)(
            include_str!($path_str),
            std::path::Path::new(file!())
                .parent()
                .unwrap()
                .join($path_str)
                .to_string_lossy()
        ));
    }};
    // we can't support params without variadic arguments, so internal assets with additional params can't be hot-reloaded
    ($app: ident, $handle: ident, $path_str: expr, $loader: expr $(, $param:expr)+) => {{
        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
        assets.insert($handle, ($loader)(
            include_str!($path_str),
            std::path::Path::new(file!())
                .parent()
                .unwrap()
                .join($path_str)
                .to_string_lossy(),
            $($param),+
        ));
    }};
}

/// Loads an "internal" binary asset by embedding the bytes stored in the given `path_str` and associates it with the given handle.
#[macro_export]
macro_rules! load_internal_binary_asset {
    ($app: ident, $handle: expr, $path_str: expr, $loader: expr) => {{
        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
        assets.insert(
            $handle,
            ($loader)(
                include_bytes!($path_str).as_ref(),
                std::path::Path::new(file!())
                    .parent()
                    .unwrap()
                    .join($path_str)
                    .to_string_lossy()
                    .into(),
            ),
        );
    }};
}
```
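As a plain-Rust illustration of the path rule used by `embedded_path!` (trim everything up to and including the first `/src/`, then re-root at the crate name), here is a simplified stand-in that takes string arguments instead of `module_path!()` and `file!()`. The helper name `embedded_style_path` is hypothetical:

```rust
use std::path::PathBuf;

/// Computes an embedded-style asset path: crate name as the root,
/// everything after the first "/src/" as the directory, plus the asset name.
fn embedded_style_path(module_path: &str, file: &str, asset: &str) -> PathBuf {
    // `module_path!()` looks like "bevy_pbr::render::mesh"; the crate name
    // is everything before the first ':'.
    let crate_name = module_path.split(':').next().unwrap();
    // Trim everything up to and including the first "/src/".
    let after_src = file.split("/src/").nth(1).unwrap();
    let dir = std::path::Path::new(after_src).parent().unwrap();
    std::path::Path::new(crate_name).join(dir).join(asset)
}

fn main() {
    let path = embedded_style_path(
        "bevy_pbr::render::mesh",
        "crates/bevy_pbr/src/render/mesh.rs",
        "mesh.wgsl",
    );
    assert_eq!(path, PathBuf::from("bevy_pbr/render/mesh.wgsl"));
    println!("{}", path.display());
}
```

This reproduces the "crate name as root, `src` trimmed" behavior that makes `embedded://bevy_pbr/render/mesh.wgsl` resolvable.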
```diff
@@ -13,6 +13,11 @@ use notify_debouncer_full::{
 };
 use std::path::{Path, PathBuf};
 
+/// An [`AssetWatcher`] that watches the filesystem for changes to asset files in a given root folder and emits [`AssetSourceEvent`]
+/// for each relevant change. This uses [`notify_debouncer_full`] to retrieve "debounced" filesystem events.
+/// "Debouncing" defines a time window to hold on to events and then removes duplicate events that fall into this window.
+/// This introduces a small delay in processing events, but it helps reduce event duplicates. A small delay is also necessary
+/// on some systems to avoid processing a change event before it has actually been applied.
 pub struct FileWatcher {
     _watcher: Debouncer<RecommendedWatcher, FileIdMap>,
 }
```
@ -23,144 +28,17 @@ impl FileWatcher {
|
|||
sender: Sender<AssetSourceEvent>,
|
||||
debounce_wait_time: Duration,
|
||||
) -> Result<Self, notify::Error> {
|
||||
let owned_root = root.clone();
|
||||
let mut debouncer = new_debouncer(
|
||||
let root = super::get_base_path().join(root);
|
||||
let watcher = new_asset_event_debouncer(
|
||||
root.clone(),
|
||||
debounce_wait_time,
|
||||
None,
|
||||
move |result: DebounceEventResult| {
|
||||
match result {
|
||||
Ok(events) => {
|
||||
for event in events.iter() {
|
||||
match event.kind {
|
||||
notify::EventKind::Create(CreateKind::File) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
if is_meta {
|
||||
sender.send(AssetSourceEvent::AddedMeta(path)).unwrap();
|
||||
} else {
|
||||
sender.send(AssetSourceEvent::AddedAsset(path)).unwrap();
|
||||
}
|
||||
}
|
||||
notify::EventKind::Create(CreateKind::Folder) => {
|
||||
let (path, _) = get_asset_path(&owned_root, &event.paths[0]);
|
||||
sender.send(AssetSourceEvent::AddedFolder(path)).unwrap();
|
||||
}
|
||||
notify::EventKind::Access(AccessKind::Close(AccessMode::Write)) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
if is_meta {
|
||||
sender.send(AssetSourceEvent::ModifiedMeta(path)).unwrap();
|
||||
} else {
|
||||
sender.send(AssetSourceEvent::ModifiedAsset(path)).unwrap();
|
||||
}
|
||||
}
|
||||
notify::EventKind::Remove(RemoveKind::Any) |
|
||||
// Because this is debounced over a reasonable period of time, "From" events are assumed to be "dangling" without
|
||||
// a follow up "To" event. Without debouncing, "From" -> "To" -> "Both" events are emitted for renames.
|
||||
// If a From is dangling, it is assumed to be "removed" from the context of the asset system.
|
||||
notify::EventKind::Modify(ModifyKind::Name(RenameMode::From)) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
sender
|
||||
.send(AssetSourceEvent::RemovedUnknown { path, is_meta })
|
||||
.unwrap();
|
||||
}
|
||||
notify::EventKind::Create(CreateKind::Any)
|
||||
| notify::EventKind::Modify(ModifyKind::Name(RenameMode::To)) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
let event = if event.paths[0].is_dir() {
|
||||
AssetSourceEvent::AddedFolder(path)
|
||||
} else if is_meta {
|
||||
AssetSourceEvent::AddedMeta(path)
|
||||
} else {
|
||||
AssetSourceEvent::AddedAsset(path)
|
||||
};
|
||||
sender.send(event).unwrap();
|
||||
}
|
||||
notify::EventKind::Modify(ModifyKind::Name(RenameMode::Both)) => {
|
||||
let (old_path, old_is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
let (new_path, new_is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[1]);
|
||||
// only the new "real" path is considered a directory
|
||||
if event.paths[1].is_dir() {
|
||||
sender
|
||||
.send(AssetSourceEvent::RenamedFolder {
|
||||
old: old_path,
|
||||
new: new_path,
|
||||
})
|
||||
.unwrap();
|
||||
} else {
|
||||
match (old_is_meta, new_is_meta) {
|
||||
(true, true) => {
|
||||
sender
|
||||
.send(AssetSourceEvent::RenamedMeta {
|
||||
old: old_path,
|
||||
new: new_path,
|
||||
})
|
||||
.unwrap();
|
||||
}
|
||||
(false, false) => {
|
||||
sender
|
||||
.send(AssetSourceEvent::RenamedAsset {
|
||||
old: old_path,
|
||||
new: new_path,
|
||||
})
|
||||
.unwrap();
|
||||
}
|
||||
(true, false) => {
|
||||
error!(
|
||||
"Asset metafile {old_path:?} was changed to asset file {new_path:?}, which is not supported. Try restarting your app to see if configuration is still valid"
|
||||
);
|
||||
}
|
||||
(false, true) => {
|
||||
error!(
|
||||
"Asset file {old_path:?} was changed to meta file {new_path:?}, which is not supported. Try restarting your app to see if configuration is still valid"
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
notify::EventKind::Modify(_) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
if event.paths[0].is_dir() {
|
||||
// modified folder means nothing in this case
|
||||
} else if is_meta {
|
||||
sender.send(AssetSourceEvent::ModifiedMeta(path)).unwrap();
|
||||
} else {
|
||||
sender.send(AssetSourceEvent::ModifiedAsset(path)).unwrap();
|
||||
};
|
||||
}
|
||||
notify::EventKind::Remove(RemoveKind::File) => {
|
||||
let (path, is_meta) =
|
||||
get_asset_path(&owned_root, &event.paths[0]);
|
||||
if is_meta {
|
||||
sender.send(AssetSourceEvent::RemovedMeta(path)).unwrap();
|
||||
} else {
|
||||
sender.send(AssetSourceEvent::RemovedAsset(path)).unwrap();
|
||||
}
|
||||
}
|
||||
notify::EventKind::Remove(RemoveKind::Folder) => {
|
||||
let (path, _) = get_asset_path(&owned_root, &event.paths[0]);
|
||||
sender.send(AssetSourceEvent::RemovedFolder(path)).unwrap();
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
}
|
||||
Err(errors) => errors.iter().for_each(|error| {
|
||||
error!("Encountered a filesystem watcher error {error:?}");
|
||||
}),
|
||||
}
|
||||
FileEventHandler {
|
||||
root,
|
||||
sender,
|
||||
last_event: None,
|
||||
},
|
||||
)?;
|
||||
debouncer.watcher().watch(&root, RecursiveMode::Recursive)?;
|
||||
debouncer.cache().add_root(&root, RecursiveMode::Recursive);
|
||||
Ok(Self {
|
||||
_watcher: debouncer,
|
||||
})
|
||||
Ok(FileWatcher { _watcher: watcher })
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -179,3 +57,219 @@ pub(crate) fn get_asset_path(root: &Path, absolute_path: &Path) -> (PathBuf, boo
    };
    (asset_path, is_meta)
}

/// This is a bit more abstracted than it normally would be because we want to try _very hard_ not to duplicate this
/// event management logic across filesystem-driven [`AssetWatcher`] impls. Each operating system / platform behaves
/// a little differently and this is the result of a delicate balancing act that we should only perform once.
pub(crate) fn new_asset_event_debouncer(
    root: PathBuf,
    debounce_wait_time: Duration,
    mut handler: impl FilesystemEventHandler,
) -> Result<Debouncer<RecommendedWatcher, FileIdMap>, notify::Error> {
    let root = super::get_base_path().join(root);
    let mut debouncer = new_debouncer(
        debounce_wait_time,
        None,
        move |result: DebounceEventResult| {
            match result {
                Ok(events) => {
                    handler.begin();
                    for event in events.iter() {
                        match event.kind {
                            notify::EventKind::Create(CreateKind::File) => {
                                if let Some((path, is_meta)) = handler.get_path(&event.paths[0]) {
                                    if is_meta {
                                        handler.handle(
                                            &event.paths,
                                            AssetSourceEvent::AddedMeta(path),
                                        );
                                    } else {
                                        handler.handle(
                                            &event.paths,
                                            AssetSourceEvent::AddedAsset(path),
                                        );
                                    }
                                }
                            }
                            notify::EventKind::Create(CreateKind::Folder) => {
                                if let Some((path, _)) = handler.get_path(&event.paths[0]) {
                                    handler
                                        .handle(&event.paths, AssetSourceEvent::AddedFolder(path));
                                }
                            }
                            notify::EventKind::Access(AccessKind::Close(AccessMode::Write)) => {
                                if let Some((path, is_meta)) = handler.get_path(&event.paths[0]) {
                                    if is_meta {
                                        handler.handle(
                                            &event.paths,
                                            AssetSourceEvent::ModifiedMeta(path),
                                        );
                                    } else {
                                        handler.handle(
                                            &event.paths,
                                            AssetSourceEvent::ModifiedAsset(path),
                                        );
                                    }
                                }
                            }
                            // Because this is debounced over a reasonable period of time, Modify(ModifyKind::Name(RenameMode::From))
                            // events are assumed to be "dangling" without a follow up "To" event. Without debouncing, "From" -> "To" -> "Both"
                            // events are emitted for renames. If a From is dangling, it is assumed to be "removed" from the context of the asset
                            // system.
                            notify::EventKind::Remove(RemoveKind::Any)
                            | notify::EventKind::Modify(ModifyKind::Name(RenameMode::From)) => {
                                if let Some((path, is_meta)) = handler.get_path(&event.paths[0]) {
                                    handler.handle(
                                        &event.paths,
                                        AssetSourceEvent::RemovedUnknown { path, is_meta },
                                    );
                                }
                            }
                            notify::EventKind::Create(CreateKind::Any)
                            | notify::EventKind::Modify(ModifyKind::Name(RenameMode::To)) => {
                                if let Some((path, is_meta)) = handler.get_path(&event.paths[0]) {
                                    let asset_event = if event.paths[0].is_dir() {
                                        AssetSourceEvent::AddedFolder(path)
                                    } else if is_meta {
                                        AssetSourceEvent::AddedMeta(path)
                                    } else {
                                        AssetSourceEvent::AddedAsset(path)
                                    };
                                    handler.handle(&event.paths, asset_event);
                                }
                            }
                            notify::EventKind::Modify(ModifyKind::Name(RenameMode::Both)) => {
                                let Some((old_path, old_is_meta)) =
                                    handler.get_path(&event.paths[0])
                                else {
                                    continue;
                                };
                                let Some((new_path, new_is_meta)) =
                                    handler.get_path(&event.paths[1])
                                else {
                                    continue;
                                };
                                // only the new "real" path is considered a directory
                                if event.paths[1].is_dir() {
                                    handler.handle(
                                        &event.paths,
                                        AssetSourceEvent::RenamedFolder {
                                            old: old_path,
                                            new: new_path,
                                        },
                                    );
                                } else {
                                    match (old_is_meta, new_is_meta) {
                                        (true, true) => {
                                            handler.handle(
                                                &event.paths,
                                                AssetSourceEvent::RenamedMeta {
                                                    old: old_path,
                                                    new: new_path,
                                                },
                                            );
                                        }
                                        (false, false) => {
                                            handler.handle(
                                                &event.paths,
                                                AssetSourceEvent::RenamedAsset {
                                                    old: old_path,
                                                    new: new_path,
                                                },
                                            );
                                        }
                                        (true, false) => {
                                            error!(
                                                "Asset metafile {old_path:?} was changed to asset file {new_path:?}, which is not supported. Try restarting your app to see if configuration is still valid"
                                            );
                                        }
                                        (false, true) => {
                                            error!(
                                                "Asset file {old_path:?} was changed to meta file {new_path:?}, which is not supported. Try restarting your app to see if configuration is still valid"
                                            );
                                        }
                                    }
                                }
                            }
                            notify::EventKind::Modify(_) => {
                                let Some((path, is_meta)) = handler.get_path(&event.paths[0])
                                else {
                                    continue;
                                };
                                if event.paths[0].is_dir() {
                                    // modified folder means nothing in this case
                                } else if is_meta {
                                    handler
                                        .handle(&event.paths, AssetSourceEvent::ModifiedMeta(path));
                                } else {
                                    handler.handle(
                                        &event.paths,
                                        AssetSourceEvent::ModifiedAsset(path),
                                    );
                                };
                            }
                            notify::EventKind::Remove(RemoveKind::File) => {
                                let Some((path, is_meta)) = handler.get_path(&event.paths[0])
                                else {
                                    continue;
                                };
                                if is_meta {
                                    handler
                                        .handle(&event.paths, AssetSourceEvent::RemovedMeta(path));
                                } else {
                                    handler
                                        .handle(&event.paths, AssetSourceEvent::RemovedAsset(path));
                                }
                            }
                            notify::EventKind::Remove(RemoveKind::Folder) => {
                                let Some((path, _)) = handler.get_path(&event.paths[0]) else {
                                    continue;
                                };
                                handler.handle(&event.paths, AssetSourceEvent::RemovedFolder(path));
                            }
                            _ => {}
                        }
                    }
                }
                Err(errors) => errors.iter().for_each(|error| {
                    error!("Encountered a filesystem watcher error {error:?}");
                }),
            }
        },
    )?;
    debouncer.watcher().watch(&root, RecursiveMode::Recursive)?;
    debouncer.cache().add_root(&root, RecursiveMode::Recursive);
    Ok(debouncer)
}
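The dangling-rename rule above can be illustrated in isolation: after debouncing, a batch contains either a coalesced `Both` rename or a lone `From`, and a lone `From` is treated as a removal. A minimal, std-only sketch (the `Rename` and `SourceEvent` types are hypothetical stand-ins for `notify`'s events and `AssetSourceEvent`, not Bevy API):

```rust
use std::path::PathBuf;

// Hypothetical stand-ins for notify's rename events and AssetSourceEvent.
#[derive(Debug, PartialEq)]
enum Rename {
    From(PathBuf),          // old path only: "dangling" after debouncing
    Both(PathBuf, PathBuf), // old + new path: a coalesced rename
}

#[derive(Debug, PartialEq)]
enum SourceEvent {
    RemovedUnknown(PathBuf),
    Renamed { old: PathBuf, new: PathBuf },
}

// Mirrors the debouncer's rule: a lone `From` with no follow-up `To`
// is assumed removed; a coalesced `Both` is a real rename.
fn classify(event: Rename) -> SourceEvent {
    match event {
        Rename::From(old) => SourceEvent::RemovedUnknown(old),
        Rename::Both(old, new) => SourceEvent::Renamed { old, new },
    }
}

fn main() {
    let dangling = classify(Rename::From(PathBuf::from("a.png")));
    assert_eq!(dangling, SourceEvent::RemovedUnknown(PathBuf::from("a.png")));

    let renamed = classify(Rename::Both(PathBuf::from("a.png"), PathBuf::from("b.png")));
    assert_eq!(
        renamed,
        SourceEvent::Renamed {
            old: PathBuf::from("a.png"),
            new: PathBuf::from("b.png")
        }
    );
    println!("ok");
}
```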

pub(crate) struct FileEventHandler {
    sender: crossbeam_channel::Sender<AssetSourceEvent>,
    root: PathBuf,
    last_event: Option<AssetSourceEvent>,
}

impl FilesystemEventHandler for FileEventHandler {
    fn begin(&mut self) {
        self.last_event = None;
    }
    fn get_path(&self, absolute_path: &Path) -> Option<(PathBuf, bool)> {
        Some(get_asset_path(&self.root, absolute_path))
    }

    fn handle(&mut self, _absolute_paths: &[PathBuf], event: AssetSourceEvent) {
        if self.last_event.as_ref() != Some(&event) {
            self.last_event = Some(event.clone());
            self.sender.send(event).unwrap();
        }
    }
}

pub(crate) trait FilesystemEventHandler: Send + Sync + 'static {
    /// Called each time a set of debounced events is processed
    fn begin(&mut self);
    /// Returns an actual asset path (if one exists for the given `absolute_path`), as well as a [`bool`] that is
    /// true if the `absolute_path` corresponds to a meta file.
    fn get_path(&self, absolute_path: &Path) -> Option<(PathBuf, bool)>;
    /// Handle the given event
    fn handle(&mut self, absolute_paths: &[PathBuf], event: AssetSourceEvent);
}

@@ -1,9 +1,11 @@
#[cfg(feature = "filesystem_watcher")]
#[cfg(feature = "file_watcher")]
mod file_watcher;
#[cfg(feature = "file_watcher")]
pub use file_watcher::*;

use crate::io::{
    get_meta_path, AssetReader, AssetReaderError, AssetWatcher, AssetWriter, AssetWriterError,
    PathStream, Reader, Writer,
    get_meta_path, AssetReader, AssetReaderError, AssetWriter, AssetWriterError, PathStream,
    Reader, Writer,
};
use async_fs::{read_dir, File};
use bevy_utils::BoxedFuture;

@@ -164,23 +166,6 @@ impl AssetReader for FileAssetReader {
        Ok(metadata.file_type().is_dir())
    })
}

fn watch_for_changes(
    &self,
    _event_sender: crossbeam_channel::Sender<super::AssetSourceEvent>,
) -> Option<Box<dyn AssetWatcher>> {
    #[cfg(feature = "filesystem_watcher")]
    return Some(Box::new(
        file_watcher::FileWatcher::new(
            self.root_path.clone(),
            _event_sender,
            std::time::Duration::from_millis(300),
        )
        .unwrap(),
    ));
    #[cfg(not(feature = "filesystem_watcher"))]
    return None;
}
}

pub struct FileAssetWriter {

@@ -96,11 +96,4 @@ impl<R: AssetReader> AssetReader for GatedReader<R> {
    ) -> BoxedFuture<'a, std::result::Result<bool, AssetReaderError>> {
        self.reader.is_directory(path)
    }

    fn watch_for_changes(
        &self,
        event_sender: Sender<super::AssetSourceEvent>,
    ) -> Option<Box<dyn super::AssetWatcher>> {
        self.reader.watch_for_changes(event_sender)
    }
}

@@ -40,25 +40,31 @@ impl Dir {
        self.insert_meta(path, asset.as_bytes().to_vec());
    }

    pub fn insert_asset(&self, path: &Path, asset: Vec<u8>) {
    pub fn insert_asset(&self, path: &Path, value: impl Into<Value>) {
        let mut dir = self.clone();
        if let Some(parent) = path.parent() {
            dir = self.get_or_insert_dir(parent);
        }
        dir.0.write().assets.insert(
            path.file_name().unwrap().to_string_lossy().to_string(),
            Data(Arc::new((asset, path.to_owned()))),
            Data {
                value: value.into(),
                path: path.to_owned(),
            },
        );
    }

    pub fn insert_meta(&self, path: &Path, asset: Vec<u8>) {
    pub fn insert_meta(&self, path: &Path, value: impl Into<Value>) {
        let mut dir = self.clone();
        if let Some(parent) = path.parent() {
            dir = self.get_or_insert_dir(parent);
        }
        dir.0.write().metadata.insert(
            path.file_name().unwrap().to_string_lossy().to_string(),
            Data(Arc::new((asset, path.to_owned()))),
            Data {
                value: value.into(),
                path: path.to_owned(),
            },
        );
    }

@@ -117,11 +123,16 @@ impl Dir {
pub struct DirStream {
    dir: Dir,
    index: usize,
    dir_index: usize,
}

impl DirStream {
    fn new(dir: Dir) -> Self {
        Self { dir, index: 0 }
        Self {
            dir,
            index: 0,
            dir_index: 0,
        }
    }
}

@@ -133,10 +144,17 @@ impl Stream for DirStream {
        _cx: &mut std::task::Context<'_>,
    ) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        let index = this.index;
        this.index += 1;
        let dir = this.dir.0.read();
        Poll::Ready(dir.assets.values().nth(index).map(|d| d.path().to_owned()))

        let dir_index = this.dir_index;
        if let Some(dir_path) = dir.dirs.keys().nth(dir_index).map(|d| dir.path.join(d)) {
            this.dir_index += 1;
            Poll::Ready(Some(dir_path))
        } else {
            let index = this.index;
            this.index += 1;
            Poll::Ready(dir.assets.values().nth(index).map(|d| d.path().to_owned()))
        }
    }
}
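The updated `poll_next` keeps two cursors and exhausts subdirectory entries before yielding any asset paths. The same traversal order can be sketched with plain (non-async) code; the names here are illustrative, not Bevy API:

```rust
// Sketch of DirStream's two-cursor traversal: directories first, then files.
fn list(dirs: &[&str], assets: &[&str]) -> Vec<String> {
    let mut out = Vec::new();
    let mut dir_index = 0; // cursor over subdirectories
    let mut index = 0; // cursor over assets
    loop {
        // Prefer the next directory entry, like poll_next does.
        if let Some(d) = dirs.get(dir_index) {
            dir_index += 1;
            out.push((*d).to_string());
        } else if let Some(a) = assets.get(index) {
            index += 1;
            out.push((*a).to_string());
        } else {
            break; // both cursors exhausted -> stream ends (Poll::Ready(None))
        }
    }
    out
}

fn main() {
    let out = list(&["textures", "models"], &["a.png", "b.png"]);
    assert_eq!(out, ["textures", "models", "a.png", "b.png"]);
    println!("{}", out.join(","));
}
```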

@@ -149,14 +167,45 @@ pub struct MemoryAssetReader {

/// Asset data stored in a [`Dir`].
#[derive(Clone, Debug)]
pub struct Data(Arc<(Vec<u8>, PathBuf)>);
pub struct Data {
    path: PathBuf,
    value: Value,
}

/// Stores either an allocated vec of bytes or a static array of bytes.
#[derive(Clone, Debug)]
pub enum Value {
    Vec(Arc<Vec<u8>>),
    Static(&'static [u8]),
}

impl Data {
    fn path(&self) -> &Path {
        &self.0 .1
        &self.path
    }
    fn data(&self) -> &[u8] {
        &self.0 .0
    fn value(&self) -> &[u8] {
        match &self.value {
            Value::Vec(vec) => vec,
            Value::Static(value) => value,
        }
    }
}

impl From<Vec<u8>> for Value {
    fn from(value: Vec<u8>) -> Self {
        Self::Vec(Arc::new(value))
    }
}

impl From<&'static [u8]> for Value {
    fn from(value: &'static [u8]) -> Self {
        Self::Static(value)
    }
}

impl<const N: usize> From<&'static [u8; N]> for Value {
    fn from(value: &'static [u8; N]) -> Self {
        Self::Static(value)
    }
}
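The `Value` change lets callers pass either owned bytes or `&'static [u8]` (for example, data embedded with `include_bytes!`) through the same `impl Into<Value>` parameter, with no copy in the static case. A self-contained sketch of the same pattern, outside Bevy:

```rust
use std::sync::Arc;

// Sketch of the Value storage pattern used by the in-memory asset reader:
// either an Arc-shared heap allocation or a borrowed 'static slice,
// with From impls so callers can pass either.
#[derive(Clone, Debug)]
enum Value {
    Vec(Arc<Vec<u8>>),
    Static(&'static [u8]),
}

impl Value {
    fn bytes(&self) -> &[u8] {
        match self {
            Value::Vec(v) => v,
            Value::Static(s) => s,
        }
    }
}

impl From<Vec<u8>> for Value {
    fn from(v: Vec<u8>) -> Self {
        Self::Vec(Arc::new(v))
    }
}

impl From<&'static [u8]> for Value {
    fn from(s: &'static [u8]) -> Self {
        Self::Static(s)
    }
}

// Lets array literals like b"..." (which are &[u8; N]) convert too.
impl<const N: usize> From<&'static [u8; N]> for Value {
    fn from(s: &'static [u8; N]) -> Self {
        Self::Static(s)
    }
}

fn main() {
    let heap: Value = b"hello".to_vec().into(); // allocated at runtime
    let fixed: Value = b"hello".into(); // embedded, zero-copy
    assert_eq!(heap.bytes(), fixed.bytes());
    println!("{}", heap.bytes().len());
}
```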

@@ -171,10 +220,11 @@ impl AsyncRead for DataReader {
        cx: &mut std::task::Context<'_>,
        buf: &mut [u8],
    ) -> std::task::Poll<futures_io::Result<usize>> {
        if self.bytes_read >= self.data.data().len() {
        if self.bytes_read >= self.data.value().len() {
            Poll::Ready(Ok(0))
        } else {
            let n = ready!(Pin::new(&mut &self.data.data()[self.bytes_read..]).poll_read(cx, buf))?;
            let n =
                ready!(Pin::new(&mut &self.data.value()[self.bytes_read..]).poll_read(cx, buf))?;
            self.bytes_read += n;
            Poll::Ready(Ok(n))
        }

@@ -196,7 +246,7 @@ impl AssetReader for MemoryAssetReader {
            });
            reader
        })
        .ok_or(AssetReaderError::NotFound(PathBuf::new()))
        .ok_or_else(|| AssetReaderError::NotFound(path.to_path_buf()))
    })
}

@@ -214,7 +264,7 @@ impl AssetReader for MemoryAssetReader {
            });
            reader
        })
        .ok_or(AssetReaderError::NotFound(PathBuf::new()))
        .ok_or_else(|| AssetReaderError::NotFound(path.to_path_buf()))
    })
}

@@ -229,7 +279,7 @@ impl AssetReader for MemoryAssetReader {
            let stream: Box<PathStream> = Box::new(DirStream::new(dir));
            stream
        })
        .ok_or(AssetReaderError::NotFound(PathBuf::new()))
        .ok_or_else(|| AssetReaderError::NotFound(path.to_path_buf()))
    })
}

@@ -239,13 +289,6 @@ impl AssetReader for MemoryAssetReader {
    ) -> BoxedFuture<'a, std::result::Result<bool, AssetReaderError>> {
        Box::pin(async move { Ok(self.root.get_dir(path).is_some()) })
    }

    fn watch_for_changes(
        &self,
        _event_sender: crossbeam_channel::Sender<super::AssetSourceEvent>,
    ) -> Option<Box<dyn super::AssetWatcher>> {
        None
    }
}

#[cfg(test)]

@@ -263,12 +306,12 @@ pub mod test {
        dir.insert_asset(a_path, a_data.clone());
        let asset = dir.get_asset(a_path).unwrap();
        assert_eq!(asset.path(), a_path);
        assert_eq!(asset.data(), a_data);
        assert_eq!(asset.value(), a_data);

        dir.insert_meta(a_path, a_meta.clone());
        let meta = dir.get_metadata(a_path).unwrap();
        assert_eq!(meta.path(), a_path);
        assert_eq!(meta.data(), a_meta);
        assert_eq!(meta.value(), a_meta);

        let b_path = Path::new("x/y/b.txt");
        let b_data = "b".as_bytes().to_vec();

@@ -278,10 +321,10 @@ pub mod test {

        let asset = dir.get_asset(b_path).unwrap();
        assert_eq!(asset.path(), b_path);
        assert_eq!(asset.data(), b_data);
        assert_eq!(asset.value(), b_data);

        let meta = dir.get_metadata(b_path).unwrap();
        assert_eq!(meta.path(), b_path);
        assert_eq!(meta.data(), b_meta);
        assert_eq!(meta.value(), b_meta);
    }
}
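The `.ok_or(...)` → `.ok_or_else(...)` changes above do two things: the `NotFound` error now carries the path that was actually requested instead of an empty `PathBuf`, and the error value (with its allocation) is only constructed on the failure path, since `ok_or` evaluates its argument eagerly. A standalone illustration of the idiom (`ReadError` and `lookup` are illustrative names):

```rust
use std::path::{Path, PathBuf};

#[derive(Debug, PartialEq)]
enum ReadError {
    NotFound(PathBuf),
}

fn lookup(found: bool, path: &Path) -> Result<&'static str, ReadError> {
    // ok_or_else defers building the error (and its PathBuf allocation)
    // until the None case, and records the path that was actually asked for.
    found
        .then_some("bytes")
        .ok_or_else(|| ReadError::NotFound(path.to_path_buf()))
}

fn main() {
    assert_eq!(lookup(true, Path::new("a.png")), Ok("bytes"));
    assert_eq!(
        lookup(false, Path::new("missing.png")),
        Err(ReadError::NotFound(PathBuf::from("missing.png")))
    );
    println!("ok");
}
```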

@@ -1,5 +1,6 @@
#[cfg(target_os = "android")]
pub mod android;
pub mod embedded;
#[cfg(not(target_arch = "wasm32"))]
pub mod file;
pub mod gated;

@@ -8,13 +9,12 @@ pub mod processor_gated;
#[cfg(target_arch = "wasm32")]
pub mod wasm;

mod provider;
mod source;

pub use futures_lite::{AsyncReadExt, AsyncWriteExt};
pub use provider::*;
pub use source::*;

use bevy_utils::BoxedFuture;
use crossbeam_channel::Sender;
use futures_io::{AsyncRead, AsyncWrite};
use futures_lite::{ready, Stream};
use std::{

@@ -65,13 +65,6 @@ pub trait AssetReader: Send + Sync + 'static {
        path: &'a Path,
    ) -> BoxedFuture<'a, Result<bool, AssetReaderError>>;

    /// Returns an Asset watcher that will send events on the given channel.
    /// If this reader does not support watching for changes, this will return [`None`].
    fn watch_for_changes(
        &self,
        event_sender: Sender<AssetSourceEvent>,
    ) -> Option<Box<dyn AssetWatcher>>;

    /// Reads asset metadata bytes at the given `path` into a [`Vec<u8>`]. This is a convenience
    /// function that wraps [`AssetReader::read_meta`] by default.
    fn read_meta_bytes<'a>(

@@ -179,7 +172,7 @@ pub trait AssetWriter: Send + Sync + 'static {
}

/// An "asset source change event" that occurs whenever asset (or asset metadata) is created/added/removed
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum AssetSourceEvent {
    /// An asset at this path was added.
    AddedAsset(PathBuf),

@@ -218,8 +211,6 @@ pub enum AssetSourceEvent {

/// A handle to an "asset watcher" process, that will listen for and emit [`AssetSourceEvent`] values for as long as
/// [`AssetWatcher`] has not been dropped.
///
/// See [`AssetReader::watch_for_changes`].
pub trait AssetWatcher: Send + Sync + 'static {}

/// An [`AsyncRead`] implementation capable of reading a [`Vec<u8>`].
@@ -1,5 +1,5 @@
use crate::{
    io::{AssetReader, AssetReaderError, PathStream, Reader},
    io::{AssetReader, AssetReaderError, AssetSourceId, PathStream, Reader},
    processor::{AssetProcessorData, ProcessStatus},
    AssetPath,
};

@@ -15,13 +15,19 @@ use std::{path::Path, pin::Pin, sync::Arc};
/// [`AssetProcessor`]: crate::processor::AssetProcessor
pub struct ProcessorGatedReader {
    reader: Box<dyn AssetReader>,
    source: AssetSourceId<'static>,
    processor_data: Arc<AssetProcessorData>,
}

impl ProcessorGatedReader {
    /// Creates a new [`ProcessorGatedReader`].
    pub fn new(reader: Box<dyn AssetReader>, processor_data: Arc<AssetProcessorData>) -> Self {
    pub fn new(
        source: AssetSourceId<'static>,
        reader: Box<dyn AssetReader>,
        processor_data: Arc<AssetProcessorData>,
    ) -> Self {
        Self {
            source,
            processor_data,
            reader,
        }

@@ -31,12 +37,12 @@ impl ProcessorGatedReader {
    /// while it is held.
    async fn get_transaction_lock(
        &self,
        path: &Path,
        path: &AssetPath<'static>,
    ) -> Result<RwLockReadGuardArc<()>, AssetReaderError> {
        let infos = self.processor_data.asset_infos.read().await;
        let info = infos
            .get(&AssetPath::from_path(path.to_path_buf()))
            .ok_or_else(|| AssetReaderError::NotFound(path.to_owned()))?;
            .get(path)
            .ok_or_else(|| AssetReaderError::NotFound(path.path().to_owned()))?;
        Ok(info.file_transaction_lock.read_arc().await)
    }
}

@@ -47,20 +53,20 @@ impl AssetReader for ProcessorGatedReader {
        path: &'a Path,
    ) -> BoxedFuture<'a, Result<Box<Reader<'a>>, AssetReaderError>> {
        Box::pin(async move {
            trace!("Waiting for processing to finish before reading {:?}", path);
            let process_result = self.processor_data.wait_until_processed(path).await;
            let asset_path = AssetPath::from(path.to_path_buf()).with_source(self.source.clone());
            trace!("Waiting for processing to finish before reading {asset_path}");
            let process_result = self
                .processor_data
                .wait_until_processed(asset_path.clone())
                .await;
            match process_result {
                ProcessStatus::Processed => {}
                ProcessStatus::Failed | ProcessStatus::NonExistent => {
                    return Err(AssetReaderError::NotFound(path.to_owned()))
                    return Err(AssetReaderError::NotFound(path.to_owned()));
                }
            }
            trace!(
                "Processing finished with {:?}, reading {:?}",
                process_result,
                path
            );
            let lock = self.get_transaction_lock(path).await?;
            trace!("Processing finished with {asset_path}, reading {process_result:?}",);
            let lock = self.get_transaction_lock(&asset_path).await?;
            let asset_reader = self.reader.read(path).await?;
            let reader: Box<Reader<'a>> =
                Box::new(TransactionLockedReader::new(asset_reader, lock));

@@ -73,23 +79,20 @@ impl AssetReader for ProcessorGatedReader {
        path: &'a Path,
    ) -> BoxedFuture<'a, Result<Box<Reader<'a>>, AssetReaderError>> {
        Box::pin(async move {
            trace!(
                "Waiting for processing to finish before reading meta {:?}",
                path
            );
            let process_result = self.processor_data.wait_until_processed(path).await;
            let asset_path = AssetPath::from(path.to_path_buf()).with_source(self.source.clone());
            trace!("Waiting for processing to finish before reading meta for {asset_path}",);
            let process_result = self
                .processor_data
                .wait_until_processed(asset_path.clone())
                .await;
            match process_result {
                ProcessStatus::Processed => {}
                ProcessStatus::Failed | ProcessStatus::NonExistent => {
                    return Err(AssetReaderError::NotFound(path.to_owned()));
                }
            }
            trace!(
                "Processing finished with {:?}, reading meta {:?}",
                process_result,
                path
            );
            let lock = self.get_transaction_lock(path).await?;
            trace!("Processing finished with {process_result:?}, reading meta for {asset_path}",);
            let lock = self.get_transaction_lock(&asset_path).await?;
            let meta_reader = self.reader.read_meta(path).await?;
            let reader: Box<Reader<'a>> = Box::new(TransactionLockedReader::new(meta_reader, lock));
            Ok(reader)

@@ -127,13 +130,6 @@ impl AssetReader for ProcessorGatedReader {
        Ok(result)
    })
}

fn watch_for_changes(
    &self,
    event_sender: crossbeam_channel::Sender<super::AssetSourceEvent>,
) -> Option<Box<dyn super::AssetWatcher>> {
    self.reader.watch_for_changes(event_sender)
}
}

/// An [`AsyncRead`] impl that will hold its asset's transaction lock until [`TransactionLockedReader`] is dropped.
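`ProcessorGatedReader` holds the asset's transaction read lock for as long as the returned reader lives, so the processor cannot rewrite the file mid-read. The shape of that guarantee can be sketched with `std::sync::RwLock` (the names `LockedReader` and `read_gated` are illustrative, not Bevy API):

```rust
use std::sync::{RwLock, RwLockReadGuard};

// Sketch: a reader that keeps a read guard alive while bytes are consumed,
// so writers (the processor) are blocked until the reader is dropped.
struct LockedReader<'a> {
    _guard: RwLockReadGuard<'a, ()>,
    data: &'a [u8],
}

fn read_gated<'a>(lock: &'a RwLock<()>, data: &'a [u8]) -> LockedReader<'a> {
    LockedReader {
        _guard: lock.read().unwrap(),
        data,
    }
}

fn main() {
    let lock = RwLock::new(());
    let bytes = b"processed asset";
    let reader = read_gated(&lock, bytes);
    assert_eq!(reader.data.len(), 15);
    // While `reader` lives, a write lock cannot be taken.
    assert!(lock.try_write().is_err());
    drop(reader);
    // Once it is dropped, the "processor" may write again.
    assert!(lock.try_write().is_ok());
    println!("ok");
}
```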
@@ -1,190 +0,0 @@
use bevy_ecs::system::Resource;
use bevy_utils::HashMap;

use crate::{
    io::{AssetReader, AssetWriter},
    AssetPlugin,
};

/// A reference to an "asset provider", which maps to an [`AssetReader`] and/or [`AssetWriter`].
#[derive(Default, Clone, Debug)]
pub enum AssetProvider {
    /// The default asset provider
    #[default]
    Default,
    /// A custom / named asset provider
    Custom(String),
}

/// A [`Resource`] that holds (repeatable) functions capable of producing new [`AssetReader`] and [`AssetWriter`] instances
/// for a given [`AssetProvider`].
#[derive(Resource, Default)]
pub struct AssetProviders {
    readers: HashMap<String, Box<dyn FnMut() -> Box<dyn AssetReader> + Send + Sync>>,
    writers: HashMap<String, Box<dyn FnMut() -> Box<dyn AssetWriter> + Send + Sync>>,
    default_file_source: Option<String>,
    default_file_destination: Option<String>,
}

impl AssetProviders {
    /// Inserts a new `get_reader` function with the given `provider` name. This function will be used to create new [`AssetReader`]s
    /// when they are requested for the given `provider`.
    pub fn insert_reader(
        &mut self,
        provider: &str,
        get_reader: impl FnMut() -> Box<dyn AssetReader> + Send + Sync + 'static,
    ) {
        self.readers
            .insert(provider.to_string(), Box::new(get_reader));
    }
    /// Inserts a new `get_reader` function with the given `provider` name. This function will be used to create new [`AssetReader`]s
    /// when they are requested for the given `provider`.
    pub fn with_reader(
        mut self,
        provider: &str,
        get_reader: impl FnMut() -> Box<dyn AssetReader> + Send + Sync + 'static,
    ) -> Self {
        self.insert_reader(provider, get_reader);
        self
    }
    /// Inserts a new `get_writer` function with the given `provider` name. This function will be used to create new [`AssetWriter`]s
    /// when they are requested for the given `provider`.
    pub fn insert_writer(
        &mut self,
        provider: &str,
        get_writer: impl FnMut() -> Box<dyn AssetWriter> + Send + Sync + 'static,
    ) {
        self.writers
            .insert(provider.to_string(), Box::new(get_writer));
    }
    /// Inserts a new `get_writer` function with the given `provider` name. This function will be used to create new [`AssetWriter`]s
    /// when they are requested for the given `provider`.
    pub fn with_writer(
        mut self,
        provider: &str,
        get_writer: impl FnMut() -> Box<dyn AssetWriter> + Send + Sync + 'static,
    ) -> Self {
        self.insert_writer(provider, get_writer);
        self
    }
    /// Returns the default "asset source" path for the [`FileAssetReader`] and [`FileAssetWriter`].
    ///
    /// [`FileAssetReader`]: crate::io::file::FileAssetReader
    /// [`FileAssetWriter`]: crate::io::file::FileAssetWriter
    pub fn default_file_source(&self) -> &str {
        self.default_file_source
            .as_deref()
            .unwrap_or(AssetPlugin::DEFAULT_FILE_SOURCE)
    }

    /// Sets the default "asset source" path for the [`FileAssetReader`] and [`FileAssetWriter`].
    ///
    /// [`FileAssetReader`]: crate::io::file::FileAssetReader
    /// [`FileAssetWriter`]: crate::io::file::FileAssetWriter
    pub fn with_default_file_source(mut self, path: String) -> Self {
        self.default_file_source = Some(path);
        self
    }

    /// Sets the default "asset destination" path for the [`FileAssetReader`] and [`FileAssetWriter`].
    ///
    /// [`FileAssetReader`]: crate::io::file::FileAssetReader
    /// [`FileAssetWriter`]: crate::io::file::FileAssetWriter
    pub fn with_default_file_destination(mut self, path: String) -> Self {
        self.default_file_destination = Some(path);
        self
    }

    /// Returns the default "asset destination" path for the [`FileAssetReader`] and [`FileAssetWriter`].
    ///
    /// [`FileAssetReader`]: crate::io::file::FileAssetReader
    /// [`FileAssetWriter`]: crate::io::file::FileAssetWriter
    pub fn default_file_destination(&self) -> &str {
        self.default_file_destination
            .as_deref()
            .unwrap_or(AssetPlugin::DEFAULT_FILE_DESTINATION)
    }

    /// Returns a new "source" [`AssetReader`] for the given [`AssetProvider`].
    pub fn get_source_reader(&mut self, provider: &AssetProvider) -> Box<dyn AssetReader> {
        match provider {
            AssetProvider::Default => {
                #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
                let reader = super::file::FileAssetReader::new(self.default_file_source());
                #[cfg(target_arch = "wasm32")]
                let reader = super::wasm::HttpWasmAssetReader::new(self.default_file_source());
                #[cfg(target_os = "android")]
                let reader = super::android::AndroidAssetReader;
                Box::new(reader)
            }
            AssetProvider::Custom(provider) => {
                let get_reader = self
                    .readers
                    .get_mut(provider)
                    .unwrap_or_else(|| panic!("Asset Provider {} does not exist", provider));
                (get_reader)()
            }
        }
    }
    /// Returns a new "destination" [`AssetReader`] for the given [`AssetProvider`].
    pub fn get_destination_reader(&mut self, provider: &AssetProvider) -> Box<dyn AssetReader> {
        match provider {
            AssetProvider::Default => {
                #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
                let reader = super::file::FileAssetReader::new(self.default_file_destination());
                #[cfg(target_arch = "wasm32")]
                let reader = super::wasm::HttpWasmAssetReader::new(self.default_file_destination());
                #[cfg(target_os = "android")]
                let reader = super::android::AndroidAssetReader;
                Box::new(reader)
            }
            AssetProvider::Custom(provider) => {
                let get_reader = self
                    .readers
                    .get_mut(provider)
                    .unwrap_or_else(|| panic!("Asset Provider {} does not exist", provider));
                (get_reader)()
            }
        }
    }
    /// Returns a new "source" [`AssetWriter`] for the given [`AssetProvider`].
    pub fn get_source_writer(&mut self, provider: &AssetProvider) -> Box<dyn AssetWriter> {
        match provider {
            AssetProvider::Default => {
                #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
                return Box::new(super::file::FileAssetWriter::new(
                    self.default_file_source(),
                ));
                #[cfg(any(target_arch = "wasm32", target_os = "android"))]
                panic!("Writing assets isn't supported on this platform yet");
            }
            AssetProvider::Custom(provider) => {
                let get_writer = self
                    .writers
                    .get_mut(provider)
                    .unwrap_or_else(|| panic!("Asset Provider {} does not exist", provider));
                (get_writer)()
            }
        }
    }
    /// Returns a new "destination" [`AssetWriter`] for the given [`AssetProvider`].
    pub fn get_destination_writer(&mut self, provider: &AssetProvider) -> Box<dyn AssetWriter> {
        match provider {
            AssetProvider::Default => {
                #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
                return Box::new(super::file::FileAssetWriter::new(
                    self.default_file_destination(),
                ));
                #[cfg(any(target_arch = "wasm32", target_os = "android"))]
                panic!("Writing assets isn't supported on this platform yet");
            }
            AssetProvider::Custom(provider) => {
                let get_writer = self
                    .writers
                    .get_mut(provider)
                    .unwrap_or_else(|| panic!("Asset Provider {} does not exist", provider));
                (get_writer)()
            }
        }
    }
}
crates/bevy_asset/src/io/source.rs (new file, 553 lines)

@ -0,0 +1,553 @@
use crate::{
    io::{
        processor_gated::ProcessorGatedReader, AssetReader, AssetSourceEvent, AssetWatcher,
        AssetWriter,
    },
    processor::AssetProcessorData,
};
use bevy_ecs::system::Resource;
use bevy_log::{error, warn};
use bevy_utils::{CowArc, Duration, HashMap};
use std::{fmt::Display, hash::Hash, sync::Arc};
use thiserror::Error;

/// A reference to an "asset source", which maps to an [`AssetReader`] and/or [`AssetWriter`].
///
/// * [`AssetSourceId::Default`] corresponds to "default asset paths" that don't specify a source: `/path/to/asset.png`
/// * [`AssetSourceId::Name`] corresponds to asset paths that _do_ specify a source: `remote://path/to/asset.png`, where `remote` is the name.
#[derive(Default, Clone, Debug, Eq)]
pub enum AssetSourceId<'a> {
    /// The default asset source.
    #[default]
    Default,
    /// A non-default named asset source.
    Name(CowArc<'a, str>),
}

impl<'a> Display for AssetSourceId<'a> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self.as_str() {
            None => write!(f, "AssetSourceId::Default"),
            Some(v) => write!(f, "AssetSourceId::Name({v})"),
        }
    }
}

impl<'a> AssetSourceId<'a> {
    /// Creates a new [`AssetSourceId`].
    pub fn new(source: Option<impl Into<CowArc<'a, str>>>) -> AssetSourceId<'a> {
        match source {
            Some(source) => AssetSourceId::Name(source.into()),
            None => AssetSourceId::Default,
        }
    }

    /// Returns [`None`] if this is [`AssetSourceId::Default`] and [`Some`] containing the
    /// name if this is [`AssetSourceId::Name`].
    pub fn as_str(&self) -> Option<&str> {
        match self {
            AssetSourceId::Default => None,
            AssetSourceId::Name(v) => Some(v),
        }
    }

    /// If this is not already an owned / static id, create one. Otherwise, it will return itself (with a static lifetime).
    pub fn into_owned(self) -> AssetSourceId<'static> {
        match self {
            AssetSourceId::Default => AssetSourceId::Default,
            AssetSourceId::Name(v) => AssetSourceId::Name(v.into_owned()),
        }
    }

    /// Clones into an owned [`AssetSourceId<'static>`].
    /// This is equivalent to `.clone().into_owned()`.
    #[inline]
    pub fn clone_owned(&self) -> AssetSourceId<'static> {
        self.clone().into_owned()
    }
}

impl From<&'static str> for AssetSourceId<'static> {
    fn from(value: &'static str) -> Self {
        AssetSourceId::Name(value.into())
    }
}

impl<'a, 'b> From<&'a AssetSourceId<'b>> for AssetSourceId<'b> {
    fn from(value: &'a AssetSourceId<'b>) -> Self {
        value.clone()
    }
}

impl From<Option<&'static str>> for AssetSourceId<'static> {
    fn from(value: Option<&'static str>) -> Self {
        match value {
            Some(value) => AssetSourceId::Name(value.into()),
            None => AssetSourceId::Default,
        }
    }
}

impl From<String> for AssetSourceId<'static> {
    fn from(value: String) -> Self {
        AssetSourceId::Name(value.into())
    }
}

impl<'a> Hash for AssetSourceId<'a> {
    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
        self.as_str().hash(state);
    }
}

impl<'a> PartialEq for AssetSourceId<'a> {
    fn eq(&self, other: &Self) -> bool {
        self.as_str().eq(&other.as_str())
    }
}

/// Metadata about an "asset source", such as how to construct the [`AssetReader`] and [`AssetWriter`] for the source,
/// and whether or not the source is processed.
#[derive(Default)]
pub struct AssetSourceBuilder {
    pub reader: Option<Box<dyn FnMut() -> Box<dyn AssetReader> + Send + Sync>>,
    pub writer: Option<Box<dyn FnMut() -> Option<Box<dyn AssetWriter>> + Send + Sync>>,
    pub watcher: Option<
        Box<
            dyn FnMut(crossbeam_channel::Sender<AssetSourceEvent>) -> Option<Box<dyn AssetWatcher>>
                + Send
                + Sync,
        >,
    >,
    pub processed_reader: Option<Box<dyn FnMut() -> Box<dyn AssetReader> + Send + Sync>>,
    pub processed_writer: Option<Box<dyn FnMut() -> Option<Box<dyn AssetWriter>> + Send + Sync>>,
    pub processed_watcher: Option<
        Box<
            dyn FnMut(crossbeam_channel::Sender<AssetSourceEvent>) -> Option<Box<dyn AssetWatcher>>
                + Send
                + Sync,
        >,
    >,
}

impl AssetSourceBuilder {
    /// Builds a new [`AssetSource`] with the given `id`. If `watch` is true, the unprocessed source will watch for changes.
    /// If `watch_processed` is true, the processed source will watch for changes.
    pub fn build(
        &mut self,
        id: AssetSourceId<'static>,
        watch: bool,
        watch_processed: bool,
    ) -> Option<AssetSource> {
        let reader = (self.reader.as_mut()?)();
        let writer = self.writer.as_mut().and_then(|w| (w)());
        let processed_writer = self.processed_writer.as_mut().and_then(|w| (w)());
        let mut source = AssetSource {
            id: id.clone(),
            reader,
            writer,
            processed_reader: self.processed_reader.as_mut().map(|r| (r)()),
            processed_writer,
            event_receiver: None,
            watcher: None,
            processed_event_receiver: None,
            processed_watcher: None,
        };

        if watch {
            let (sender, receiver) = crossbeam_channel::unbounded();
            match self.watcher.as_mut().and_then(|w| (w)(sender)) {
                Some(w) => {
                    source.watcher = Some(w);
                    source.event_receiver = Some(receiver);
                }
                None => warn!("{id} does not have an AssetWatcher configured. Consider enabling the `file_watcher` feature. Note that Web and Android do not currently support watching assets."),
            }
        }

        if watch_processed {
            let (sender, receiver) = crossbeam_channel::unbounded();
            match self.processed_watcher.as_mut().and_then(|w| (w)(sender)) {
                Some(w) => {
                    source.processed_watcher = Some(w);
                    source.processed_event_receiver = Some(receiver);
                }
                None => warn!("{id} does not have a processed AssetWatcher configured. Consider enabling the `file_watcher` feature. Note that Web and Android do not currently support watching assets."),
            }
        }
        Some(source)
    }

    /// Will use the given `reader` function to construct unprocessed [`AssetReader`] instances.
    pub fn with_reader(
        mut self,
        reader: impl FnMut() -> Box<dyn AssetReader> + Send + Sync + 'static,
    ) -> Self {
        self.reader = Some(Box::new(reader));
        self
    }

    /// Will use the given `writer` function to construct unprocessed [`AssetWriter`] instances.
    pub fn with_writer(
        mut self,
        writer: impl FnMut() -> Option<Box<dyn AssetWriter>> + Send + Sync + 'static,
    ) -> Self {
        self.writer = Some(Box::new(writer));
        self
    }

    /// Will use the given `watcher` function to construct unprocessed [`AssetWatcher`] instances.
    pub fn with_watcher(
        mut self,
        watcher: impl FnMut(crossbeam_channel::Sender<AssetSourceEvent>) -> Option<Box<dyn AssetWatcher>>
            + Send
            + Sync
            + 'static,
    ) -> Self {
        self.watcher = Some(Box::new(watcher));
        self
    }

    /// Will use the given `reader` function to construct processed [`AssetReader`] instances.
    pub fn with_processed_reader(
        mut self,
        reader: impl FnMut() -> Box<dyn AssetReader> + Send + Sync + 'static,
    ) -> Self {
        self.processed_reader = Some(Box::new(reader));
        self
    }

    /// Will use the given `writer` function to construct processed [`AssetWriter`] instances.
    pub fn with_processed_writer(
        mut self,
        writer: impl FnMut() -> Option<Box<dyn AssetWriter>> + Send + Sync + 'static,
    ) -> Self {
        self.processed_writer = Some(Box::new(writer));
        self
    }

    /// Will use the given `watcher` function to construct processed [`AssetWatcher`] instances.
    pub fn with_processed_watcher(
        mut self,
        watcher: impl FnMut(crossbeam_channel::Sender<AssetSourceEvent>) -> Option<Box<dyn AssetWatcher>>
            + Send
            + Sync
            + 'static,
    ) -> Self {
        self.processed_watcher = Some(Box::new(watcher));
        self
    }

    /// Returns a builder containing the "platform default source" for the given `path` and `processed_path`.
    /// For most platforms, this will use [`FileAssetReader`](crate::io::file::FileAssetReader) / [`FileAssetWriter`](crate::io::file::FileAssetWriter),
    /// but some platforms (such as Android) have their own default readers / writers / watchers.
    pub fn platform_default(path: &str, processed_path: &str) -> Self {
        Self::default()
            .with_reader(AssetSource::get_default_reader(path.to_string()))
            .with_writer(AssetSource::get_default_writer(path.to_string()))
            .with_watcher(AssetSource::get_default_watcher(
                path.to_string(),
                Duration::from_millis(300),
            ))
            .with_processed_reader(AssetSource::get_default_reader(processed_path.to_string()))
            .with_processed_writer(AssetSource::get_default_writer(processed_path.to_string()))
            .with_processed_watcher(AssetSource::get_default_watcher(
                processed_path.to_string(),
                Duration::from_millis(300),
            ))
    }
}

/// A [`Resource`] that holds (repeatable) functions capable of producing new [`AssetReader`] and [`AssetWriter`] instances
/// for a given asset source.
#[derive(Resource, Default)]
pub struct AssetSourceBuilders {
    sources: HashMap<CowArc<'static, str>, AssetSourceBuilder>,
    default: Option<AssetSourceBuilder>,
}

impl AssetSourceBuilders {
    /// Inserts a new builder with the given `id`.
    pub fn insert(&mut self, id: impl Into<AssetSourceId<'static>>, source: AssetSourceBuilder) {
        match id.into() {
            AssetSourceId::Default => {
                self.default = Some(source);
            }
            AssetSourceId::Name(name) => {
                self.sources.insert(name, source);
            }
        }
    }

    /// Gets a mutable builder with the given `id`, if it exists.
    pub fn get_mut<'a, 'b>(
        &'a mut self,
        id: impl Into<AssetSourceId<'b>>,
    ) -> Option<&'a mut AssetSourceBuilder> {
        match id.into() {
            AssetSourceId::Default => self.default.as_mut(),
            AssetSourceId::Name(name) => self.sources.get_mut(&name.into_owned()),
        }
    }

    /// Builds a new [`AssetSources`] collection. If `watch` is true, the unprocessed sources will watch for changes.
    /// If `watch_processed` is true, the processed sources will watch for changes.
    pub fn build_sources(&mut self, watch: bool, watch_processed: bool) -> AssetSources {
        let mut sources = HashMap::new();
        for (id, source) in &mut self.sources {
            if let Some(data) = source.build(
                AssetSourceId::Name(id.clone_owned()),
                watch,
                watch_processed,
            ) {
                sources.insert(id.clone_owned(), data);
            }
        }

        AssetSources {
            sources,
            default: self
                .default
                .as_mut()
                .and_then(|p| p.build(AssetSourceId::Default, watch, watch_processed))
                .expect(MISSING_DEFAULT_SOURCE),
        }
    }

    /// Initializes the default [`AssetSourceBuilder`] if it has not already been set.
    pub fn init_default_source(&mut self, path: &str, processed_path: &str) {
        self.default
            .get_or_insert_with(|| AssetSourceBuilder::platform_default(path, processed_path));
    }
}

/// A collection of unprocessed and processed [`AssetReader`], [`AssetWriter`], and [`AssetWatcher`] instances
/// for a specific asset source, identified by an [`AssetSourceId`].
pub struct AssetSource {
    id: AssetSourceId<'static>,
    reader: Box<dyn AssetReader>,
    writer: Option<Box<dyn AssetWriter>>,
    processed_reader: Option<Box<dyn AssetReader>>,
    processed_writer: Option<Box<dyn AssetWriter>>,
    watcher: Option<Box<dyn AssetWatcher>>,
    processed_watcher: Option<Box<dyn AssetWatcher>>,
    event_receiver: Option<crossbeam_channel::Receiver<AssetSourceEvent>>,
    processed_event_receiver: Option<crossbeam_channel::Receiver<AssetSourceEvent>>,
}

impl AssetSource {
    /// Starts building a new [`AssetSource`].
    pub fn build() -> AssetSourceBuilder {
        AssetSourceBuilder::default()
    }

    /// Returns this source's id.
    #[inline]
    pub fn id(&self) -> AssetSourceId<'static> {
        self.id.clone()
    }

    /// Returns this source's unprocessed [`AssetReader`].
    #[inline]
    pub fn reader(&self) -> &dyn AssetReader {
        &*self.reader
    }

    /// Returns this source's unprocessed [`AssetWriter`], if it exists.
    #[inline]
    pub fn writer(&self) -> Result<&dyn AssetWriter, MissingAssetWriterError> {
        self.writer
            .as_deref()
            .ok_or_else(|| MissingAssetWriterError(self.id.clone_owned()))
    }

    /// Returns this source's processed [`AssetReader`], if it exists.
    #[inline]
    pub fn processed_reader(&self) -> Result<&dyn AssetReader, MissingProcessedAssetReaderError> {
        self.processed_reader
            .as_deref()
            .ok_or_else(|| MissingProcessedAssetReaderError(self.id.clone_owned()))
    }

    /// Returns this source's processed [`AssetWriter`], if it exists.
    #[inline]
    pub fn processed_writer(&self) -> Result<&dyn AssetWriter, MissingProcessedAssetWriterError> {
        self.processed_writer
            .as_deref()
            .ok_or_else(|| MissingProcessedAssetWriterError(self.id.clone_owned()))
    }

    /// Returns this source's unprocessed event receiver, if the source is currently watching for changes.
    #[inline]
    pub fn event_receiver(&self) -> Option<&crossbeam_channel::Receiver<AssetSourceEvent>> {
        self.event_receiver.as_ref()
    }

    /// Returns this source's processed event receiver, if the source is currently watching for changes.
    #[inline]
    pub fn processed_event_receiver(
        &self,
    ) -> Option<&crossbeam_channel::Receiver<AssetSourceEvent>> {
        self.processed_event_receiver.as_ref()
    }

    /// Returns true if the assets in this source should be processed.
    #[inline]
    pub fn should_process(&self) -> bool {
        self.processed_writer.is_some()
    }

    /// Returns a builder function for this platform's default [`AssetReader`]. `path` is the relative path to
    /// the asset root.
    pub fn get_default_reader(_path: String) -> impl FnMut() -> Box<dyn AssetReader> + Send + Sync {
        move || {
            #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
            return Box::new(super::file::FileAssetReader::new(&_path));
            #[cfg(target_arch = "wasm32")]
            return Box::new(super::wasm::HttpWasmAssetReader::new(&_path));
            #[cfg(target_os = "android")]
            return Box::new(super::android::AndroidAssetReader);
        }
    }

    /// Returns a builder function for this platform's default [`AssetWriter`]. `path` is the relative path to
    /// the asset root. This will return [`None`] if this platform does not support writing assets by default.
    pub fn get_default_writer(
        _path: String,
    ) -> impl FnMut() -> Option<Box<dyn AssetWriter>> + Send + Sync {
        move || {
            #[cfg(all(not(target_arch = "wasm32"), not(target_os = "android")))]
            return Some(Box::new(super::file::FileAssetWriter::new(&_path)));
            #[cfg(any(target_arch = "wasm32", target_os = "android"))]
            return None;
        }
    }

    /// Returns a builder function for this platform's default [`AssetWatcher`]. `path` is the relative path to
    /// the asset root. This will return [`None`] if this platform does not support watching assets by default.
    /// `file_debounce_time` is the amount of time to wait (and debounce duplicate events) before returning an event.
    /// Higher durations reduce duplicates but increase the amount of time before a change event is processed. If the
    /// duration is set too low, some systems might surface events _before_ their filesystem has the changes.
    #[allow(unused)]
    pub fn get_default_watcher(
        path: String,
        file_debounce_wait_time: Duration,
    ) -> impl FnMut(crossbeam_channel::Sender<AssetSourceEvent>) -> Option<Box<dyn AssetWatcher>>
           + Send
           + Sync {
        move |sender: crossbeam_channel::Sender<AssetSourceEvent>| {
            #[cfg(all(
                feature = "file_watcher",
                not(target_arch = "wasm32"),
                not(target_os = "android")
            ))]
            return Some(Box::new(
                super::file::FileWatcher::new(
                    std::path::PathBuf::from(path.clone()),
                    sender,
                    file_debounce_wait_time,
                )
                .unwrap(),
            ));
            #[cfg(any(
                not(feature = "file_watcher"),
                target_arch = "wasm32",
                target_os = "android"
            ))]
            return None;
        }
    }

    /// This will cause processed [`AssetReader`] futures (such as [`AssetReader::read`]) to wait until
    /// the [`AssetProcessor`](crate::AssetProcessor) has finished processing the requested asset.
    pub fn gate_on_processor(&mut self, processor_data: Arc<AssetProcessorData>) {
        if let Some(reader) = self.processed_reader.take() {
            self.processed_reader = Some(Box::new(ProcessorGatedReader::new(
                self.id(),
                reader,
                processor_data,
            )));
        }
    }
}

/// A collection of [`AssetSource`]s, keyed by [`AssetSourceId`].
pub struct AssetSources {
    sources: HashMap<CowArc<'static, str>, AssetSource>,
    default: AssetSource,
}

impl AssetSources {
    /// Gets the [`AssetSource`] with the given `id`, if it exists.
    pub fn get<'a, 'b>(
        &'a self,
        id: impl Into<AssetSourceId<'b>>,
    ) -> Result<&'a AssetSource, MissingAssetSourceError> {
        match id.into().into_owned() {
            AssetSourceId::Default => Ok(&self.default),
            AssetSourceId::Name(name) => self
                .sources
                .get(&name)
                .ok_or_else(|| MissingAssetSourceError(AssetSourceId::Name(name))),
        }
    }

    /// Iterates all asset sources in the collection (including the default source).
    pub fn iter(&self) -> impl Iterator<Item = &AssetSource> {
        self.sources.values().chain(Some(&self.default))
    }

    /// Mutably iterates all asset sources in the collection (including the default source).
    pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut AssetSource> {
        self.sources.values_mut().chain(Some(&mut self.default))
    }

    /// Iterates all processed asset sources in the collection (including the default source).
    pub fn iter_processed(&self) -> impl Iterator<Item = &AssetSource> {
        self.iter().filter(|p| p.should_process())
    }

    /// Mutably iterates all processed asset sources in the collection (including the default source).
    pub fn iter_processed_mut(&mut self) -> impl Iterator<Item = &mut AssetSource> {
        self.iter_mut().filter(|p| p.should_process())
    }

    /// Iterates over the [`AssetSourceId`] of every [`AssetSource`] in the collection (including the default source).
    pub fn ids(&self) -> impl Iterator<Item = AssetSourceId<'static>> + '_ {
        self.sources
            .keys()
            .map(|k| AssetSourceId::Name(k.clone_owned()))
            .chain(Some(AssetSourceId::Default))
    }

    /// This will cause processed [`AssetReader`] futures (such as [`AssetReader::read`]) to wait until
    /// the [`AssetProcessor`](crate::AssetProcessor) has finished processing the requested asset.
    pub fn gate_on_processor(&mut self, processor_data: Arc<AssetProcessorData>) {
        for source in self.iter_processed_mut() {
            source.gate_on_processor(processor_data.clone());
        }
    }
}

/// An error returned when an [`AssetSource`] does not exist for a given id.
#[derive(Error, Debug)]
#[error("Asset Source '{0}' does not exist")]
pub struct MissingAssetSourceError(AssetSourceId<'static>);

/// An error returned when an [`AssetWriter`] does not exist for a given id.
#[derive(Error, Debug)]
#[error("Asset Source '{0}' does not have an AssetWriter.")]
pub struct MissingAssetWriterError(AssetSourceId<'static>);

/// An error returned when a processed [`AssetReader`] does not exist for a given id.
#[derive(Error, Debug)]
#[error("Asset Source '{0}' does not have a processed AssetReader.")]
pub struct MissingProcessedAssetReaderError(AssetSourceId<'static>);

/// An error returned when a processed [`AssetWriter`] does not exist for a given id.
#[derive(Error, Debug)]
#[error("Asset Source '{0}' does not have a processed AssetWriter.")]
pub struct MissingProcessedAssetWriterError(AssetSourceId<'static>);

const MISSING_DEFAULT_SOURCE: &str =
    "A default AssetSource is required. Add one to `AssetSourceBuilders`";

crates/bevy_asset/src/io/wasm.rs:

@ -1,6 +1,5 @@
 use crate::io::{
-    get_meta_path, AssetReader, AssetReaderError, AssetWatcher, EmptyPathStream, PathStream,
-    Reader, VecReader,
+    get_meta_path, AssetReader, AssetReaderError, EmptyPathStream, PathStream, Reader, VecReader,
 };
 use bevy_log::error;
 use bevy_utils::BoxedFuture;

@ -99,11 +98,4 @@ impl AssetReader for HttpWasmAssetReader {
         error!("Reading directories is not supported with the HttpWasmAssetReader");
         Box::pin(async move { Ok(false) })
     }
-
-    fn watch_for_changes(
-        &self,
-        _event_sender: crossbeam_channel::Sender<super::AssetSourceEvent>,
-    ) -> Option<Box<dyn AssetWatcher>> {
-        None
-    }
 }

crates/bevy_asset/src/lib.rs:

@ -8,7 +8,7 @@ pub mod saver;
 pub mod prelude {
     #[doc(hidden)]
     pub use crate::{
-        Asset, AssetApp, AssetEvent, AssetId, AssetPlugin, AssetServer, Assets, Handle,
+        Asset, AssetApp, AssetEvent, AssetId, AssetMode, AssetPlugin, AssetServer, Assets, Handle,
         UntypedHandle,
     };
 }
@ -38,7 +38,7 @@ pub use server::*;
 pub use bevy_utils::BoxedFuture;

 use crate::{
-    io::{processor_gated::ProcessorGatedReader, AssetProvider, AssetProviders},
+    io::{embedded::EmbeddedAssetRegistry, AssetSourceBuilder, AssetSourceBuilders, AssetSourceId},
     processor::{AssetProcessor, Process},
 };
 use bevy_app::{App, First, MainScheduleOrder, Plugin, PostUpdate, Startup};
@ -50,141 +50,124 @@ use bevy_ecs::{
|
|||
use bevy_reflect::{FromReflect, GetTypeRegistration, Reflect, TypePath};
|
||||
use std::{any::TypeId, sync::Arc};
|
||||
|
||||
/// Provides "asset" loading and processing functionality. An [`Asset`] is a "runtime value" that is loaded from an [`AssetProvider`],
|
||||
/// Provides "asset" loading and processing functionality. An [`Asset`] is a "runtime value" that is loaded from an [`AssetSource`],
|
||||
/// which can be something like a filesystem, a network, etc.
|
||||
///
|
||||
/// Supports flexible "modes", such as [`AssetPlugin::Processed`] and
|
||||
/// [`AssetPlugin::Unprocessed`] that enable using the asset workflow that best suits your project.
|
||||
pub enum AssetPlugin {
|
||||
/// Loads assets without any "preprocessing" from the configured asset `source` (defaults to the `assets` folder).
|
||||
Unprocessed {
|
||||
source: AssetProvider,
|
||||
watch_for_changes: bool,
|
||||
},
|
||||
/// Loads "processed" assets from a given `destination` source (defaults to the `imported_assets/Default` folder). This should
|
||||
/// generally only be used when distributing apps. Use [`AssetPlugin::ProcessedDev`] to develop apps that process assets,
|
||||
/// then switch to [`AssetPlugin::Processed`] when deploying the apps.
|
||||
Processed {
|
||||
destination: AssetProvider,
|
||||
watch_for_changes: bool,
|
||||
},
|
||||
/// Starts an [`AssetProcessor`] in the background that reads assets from the `source` provider (defaults to the `assets` folder),
|
||||
/// processes them according to their [`AssetMeta`], and writes them to the `destination` provider (defaults to the `imported_assets/Default` folder).
|
||||
/// Supports flexible "modes", such as [`AssetMode::Processed`] and
|
||||
/// [`AssetMode::Unprocessed`] that enable using the asset workflow that best suits your project.
|
||||
///
|
||||
/// [`AssetSource`]: crate::io::AssetSource
|
||||
pub struct AssetPlugin {
|
||||
/// The default file path to use (relative to the project root) for unprocessed assets.
|
||||
pub file_path: String,
|
||||
/// The default file path to use (relative to the project root) for processed assets.
|
||||
pub processed_file_path: String,
|
||||
/// If set, will override the default "watch for changes" setting. By default "watch for changes" will be `false` unless
|
||||
/// the `watch` cargo feature is set. `watch` can be enabled manually, or it will be automatically enabled if a specific watcher
|
||||
/// like `file_watcher` is enabled.
|
||||
///
|
||||
/// By default this will hot reload changes to the `source` provider, resulting in reprocessing the asset and reloading it in the [`App`].
|
||||
/// Most use cases should leave this set to [`None`] and enable a specific watcher feature such as `file_watcher` to enable
|
||||
/// watching for dev-scenarios.
|
||||
pub watch_for_changes_override: Option<bool>,
|
||||
/// The [`AssetMode`] to use for this server.
|
||||
pub mode: AssetMode,
|
||||
}
|
||||
|
||||
pub enum AssetMode {
|
||||
/// Loads assets from their [`AssetSource`]'s default [`AssetReader`] without any "preprocessing".
|
||||
///
|
||||
/// [`AssetReader`]: crate::io::AssetReader
|
||||
/// [`AssetSource`]: crate::io::AssetSource
|
||||
Unprocessed,
|
||||
/// Loads assets from their final processed [`AssetReader`]. This should generally only be used when distributing apps.
|
||||
/// Use [`AssetMode::ProcessedDev`] to develop apps that process assets, then switch to [`AssetMode::Processed`] when deploying the apps.
|
||||
///
|
||||
/// [`AssetReader`]: crate::io::AssetReader
|
||||
Processed,
|
||||
/// Starts an [`AssetProcessor`] in the background that reads assets from their unprocessed [`AssetSource`] (defaults to the `assets` folder),
|
||||
/// processes them according to their [`AssetMeta`], and writes them to their processed [`AssetSource`] (defaults to the `imported_assets/Default` folder).
|
||||
///
|
||||
/// Apps will load assets from the processed [`AssetSource`]. Asset loads will wait until the asset processor has finished processing the requested asset.
|
||||
///
|
||||
/// This should generally be used in combination with the `file_watcher` cargo feature to support hot-reloading and re-processing assets.
|
||||
///
|
||||
/// [`AssetMeta`]: crate::meta::AssetMeta
|
||||
ProcessedDev {
|
||||
source: AssetProvider,
|
||||
destination: AssetProvider,
|
||||
watch_for_changes: bool,
|
||||
},
|
||||
/// [`AssetSource`]: crate::io::AssetSource
|
||||
ProcessedDev,
|
||||
}
|
||||
|
||||
impl Default for AssetPlugin {
|
||||
fn default() -> Self {
|
||||
Self::unprocessed()
|
||||
Self {
|
||||
mode: AssetMode::Unprocessed,
|
||||
file_path: Self::DEFAULT_UNPROCESSED_FILE_PATH.to_string(),
|
||||
processed_file_path: Self::DEFAULT_PROCESSED_FILE_PATH.to_string(),
|
||||
watch_for_changes_override: None,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl AssetPlugin {
|
||||
const DEFAULT_FILE_SOURCE: &'static str = "assets";
|
||||
const DEFAULT_UNPROCESSED_FILE_PATH: &'static str = "assets";
|
||||
/// NOTE: this is in the Default sub-folder to make this forward compatible with "import profiles"
|
||||
/// and to allow us to put the "processor transaction log" at `imported_assets/log`
|
||||
const DEFAULT_FILE_DESTINATION: &'static str = "imported_assets/Default";
|
||||
|
||||
/// Returns the default [`AssetPlugin::Processed`] configuration
|
||||
pub fn processed() -> Self {
|
||||
Self::Processed {
|
||||
destination: Default::default(),
|
||||
watch_for_changes: false,
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns the default [`AssetPlugin::ProcessedDev`] configuration
|
||||
pub fn processed_dev() -> Self {
|
||||
Self::ProcessedDev {
|
||||
source: Default::default(),
|
||||
destination: Default::default(),
|
||||
watch_for_changes: true,
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns the default [`AssetPlugin::Unprocessed`] configuration
|
||||
pub fn unprocessed() -> Self {
|
||||
Self::Unprocessed {
|
||||
source: Default::default(),
|
||||
watch_for_changes: false,
|
||||
}
|
||||
}
|
||||
|
||||
/// Enables watching for changes, which will hot-reload assets when they change.
|
||||
pub fn watch_for_changes(mut self) -> Self {
|
||||
-        match &mut self {
-            AssetPlugin::Unprocessed {
-                watch_for_changes, ..
-            }
-            | AssetPlugin::Processed {
-                watch_for_changes, ..
-            }
-            | AssetPlugin::ProcessedDev {
-                watch_for_changes, ..
-            } => *watch_for_changes = true,
-        };
-        self
-    }
+    const DEFAULT_PROCESSED_FILE_PATH: &'static str = "imported_assets/Default";
 }
 
 impl Plugin for AssetPlugin {
     fn build(&self, app: &mut App) {
-        app.init_schedule(UpdateAssets)
-            .init_schedule(AssetEvents)
-            .init_resource::<AssetProviders>();
+        app.init_schedule(UpdateAssets).init_schedule(AssetEvents);
+        let embedded = EmbeddedAssetRegistry::default();
         {
-            match self {
-                AssetPlugin::Unprocessed {
-                    source,
-                    watch_for_changes,
-                } => {
-                    let source_reader = app
-                        .world
-                        .resource_mut::<AssetProviders>()
-                        .get_source_reader(source);
-                    app.insert_resource(AssetServer::new(source_reader, *watch_for_changes));
+            let mut sources = app
+                .world
+                .get_resource_or_insert_with::<AssetSourceBuilders>(Default::default);
+            sources.init_default_source(&self.file_path, &self.processed_file_path);
+            embedded.register_source(&mut sources);
+        }
+        {
+            let mut watch = cfg!(feature = "watch");
+            if let Some(watch_override) = self.watch_for_changes_override {
+                watch = watch_override;
+            }
+            match self.mode {
+                AssetMode::Unprocessed => {
+                    let mut builders = app.world.resource_mut::<AssetSourceBuilders>();
+                    let sources = builders.build_sources(watch, false);
+                    app.insert_resource(AssetServer::new(
+                        sources,
+                        AssetServerMode::Unprocessed,
+                        watch,
+                    ));
                 }
-                AssetPlugin::Processed {
-                    destination,
-                    watch_for_changes,
-                } => {
-                    let destination_reader = app
-                        .world
-                        .resource_mut::<AssetProviders>()
-                        .get_destination_reader(destination);
-                    app.insert_resource(AssetServer::new(destination_reader, *watch_for_changes));
+                AssetMode::Processed => {
+                    let mut builders = app.world.resource_mut::<AssetSourceBuilders>();
+                    let sources = builders.build_sources(false, watch);
+                    app.insert_resource(AssetServer::new(
+                        sources,
+                        AssetServerMode::Processed,
+                        watch,
+                    ));
                 }
-                AssetPlugin::ProcessedDev {
-                    source,
-                    destination,
-                    watch_for_changes,
-                } => {
-                    let mut asset_providers = app.world.resource_mut::<AssetProviders>();
-                    let processor = AssetProcessor::new(&mut asset_providers, source, destination);
-                    let destination_reader = asset_providers.get_destination_reader(source);
-                    // the main asset server gates loads based on asset state
-                    let gated_reader =
-                        ProcessorGatedReader::new(destination_reader, processor.data.clone());
+                AssetMode::ProcessedDev => {
+                    let mut builders = app.world.resource_mut::<AssetSourceBuilders>();
+                    let processor = AssetProcessor::new(&mut builders);
+                    let mut sources = builders.build_sources(false, watch);
+                    sources.gate_on_processor(processor.data.clone());
                     // the main asset server shares loaders with the processor asset server
                     app.insert_resource(AssetServer::new_with_loaders(
-                        Box::new(gated_reader),
+                        sources,
                         processor.server().data.loaders.clone(),
-                        *watch_for_changes,
+                        AssetServerMode::Processed,
+                        watch,
                     ))
                     .insert_resource(processor)
                     .add_systems(Startup, AssetProcessor::start);
                 }
             }
         }
-        app.init_asset::<LoadedFolder>()
+        app.insert_resource(embedded)
+            .init_asset::<LoadedFolder>()
             .init_asset::<()>()
             .configure_sets(
                 UpdateAssets,
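In the rewritten `build` above, `build_sources` takes two booleans: whether to watch unprocessed sources and whether to watch processed ones, and each `AssetMode` arm passes a different pair. A minimal standalone sketch of that per-mode dispatch (the enum and function here are illustrative stand-ins, not the Bevy API):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum AssetMode {
    Unprocessed,
    Processed,
    ProcessedDev,
}

// Mirrors the flags passed to `build_sources(watch_unprocessed, watch_processed)`
// in the hunk above: unprocessed sources are watched only in Unprocessed mode,
// while Processed and ProcessedDev watch the processed outputs instead.
fn watch_flags(mode: AssetMode, watch: bool) -> (bool, bool) {
    match mode {
        AssetMode::Unprocessed => (watch, false),
        AssetMode::Processed | AssetMode::ProcessedDev => (false, watch),
    }
}

fn main() {
    assert_eq!(watch_flags(AssetMode::Unprocessed, true), (true, false));
    assert_eq!(watch_flags(AssetMode::Processed, true), (false, true));
    assert_eq!(watch_flags(AssetMode::ProcessedDev, false), (false, false));
}
```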
@@ -254,6 +237,12 @@ pub trait AssetApp {
     fn register_asset_loader<L: AssetLoader>(&mut self, loader: L) -> &mut Self;
     /// Registers the given `processor` in the [`App`]'s [`AssetProcessor`].
     fn register_asset_processor<P: Process>(&mut self, processor: P) -> &mut Self;
+    /// Registers the given [`AssetSourceBuilder`] with the given `id`.
+    fn register_asset_source(
+        &mut self,
+        id: impl Into<AssetSourceId<'static>>,
+        source: AssetSourceBuilder,
+    ) -> &mut Self;
     /// Sets the default asset processor for the given `extension`.
     fn set_default_asset_processor<P: Process>(&mut self, extension: &str) -> &mut Self;
     /// Initializes the given loader in the [`App`]'s [`AssetServer`].

@@ -350,6 +339,21 @@ impl AssetApp for App {
         }
         self
     }
+
+    fn register_asset_source(
+        &mut self,
+        id: impl Into<AssetSourceId<'static>>,
+        source: AssetSourceBuilder,
+    ) -> &mut Self {
+        {
+            let mut sources = self
+                .world
+                .get_resource_or_insert_with(AssetSourceBuilders::default);
+            sources.insert(id, source);
+        }
+
+        self
+    }
 }
 
 /// A system set that holds all "track asset" operations.
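`register_asset_source` stores an `AssetSourceBuilder` rather than a built source because construction must be repeatable: `build_sources` is called more than once (for the processor's server and the main server), and each call needs fresh readers. A toy sketch of that store-a-factory-closure pattern (all names here are hypothetical stand-ins, not the Bevy API):

```rust
// Illustrative stand-ins; not the Bevy API.
trait Reader {
    fn name(&self) -> String;
}

struct FileReader(String);
impl Reader for FileReader {
    fn name(&self) -> String {
        format!("FileReader({})", self.0)
    }
}

// Storing a factory closure makes construction repeatable: every call to
// `build` yields a brand-new boxed reader from the same builder.
struct SourceBuilder {
    reader: Box<dyn Fn() -> Box<dyn Reader>>,
}

impl SourceBuilder {
    fn with_reader(f: impl Fn() -> Box<dyn Reader> + 'static) -> Self {
        Self { reader: Box::new(f) }
    }

    fn build(&self) -> Box<dyn Reader> {
        (self.reader)()
    }
}

fn main() {
    let builder = SourceBuilder::with_reader(|| Box::new(FileReader("other".to_string())));
    // One builder, many independent readers (e.g. processor + main server).
    assert_eq!(builder.build().name(), "FileReader(other)");
    assert_eq!(builder.build().name(), "FileReader(other)");
}
```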
@@ -366,55 +370,6 @@ pub struct UpdateAssets;
 #[derive(Debug, Hash, PartialEq, Eq, Clone, ScheduleLabel)]
 pub struct AssetEvents;
-
-/// Loads an "internal" asset by embedding the string stored in the given `path_str` and associates it with the given handle.
-#[macro_export]
-macro_rules! load_internal_asset {
-    ($app: ident, $handle: expr, $path_str: expr, $loader: expr) => {{
-        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
-        assets.insert($handle, ($loader)(
-            include_str!($path_str),
-            std::path::Path::new(file!())
-                .parent()
-                .unwrap()
-                .join($path_str)
-                .to_string_lossy()
-        ));
-    }};
-    // we can't support params without variadic arguments, so internal assets with additional params can't be hot-reloaded
-    ($app: ident, $handle: ident, $path_str: expr, $loader: expr $(, $param:expr)+) => {{
-        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
-        assets.insert($handle, ($loader)(
-            include_str!($path_str),
-            std::path::Path::new(file!())
-                .parent()
-                .unwrap()
-                .join($path_str)
-                .to_string_lossy(),
-            $($param),+
-        ));
-    }};
-}
-
-/// Loads an "internal" binary asset by embedding the bytes stored in the given `path_str` and associates it with the given handle.
-#[macro_export]
-macro_rules! load_internal_binary_asset {
-    ($app: ident, $handle: expr, $path_str: expr, $loader: expr) => {{
-        let mut assets = $app.world.resource_mut::<$crate::Assets<_>>();
-        assets.insert(
-            $handle,
-            ($loader)(
-                include_bytes!($path_str).as_ref(),
-                std::path::Path::new(file!())
-                    .parent()
-                    .unwrap()
-                    .join($path_str)
-                    .to_string_lossy()
-                    .into(),
-            ),
-        );
-    }};
-}
 
 #[cfg(test)]
 mod tests {
     use crate::{
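For reference, the removed `load_internal_asset!` machinery derived each embedded asset's path by joining the directory of the invoking source file (`file!()`) with the relative `path_str` — the approach the new `embedded` asset source replaces. The same computation as a plain function (the example paths below are hypothetical):

```rust
use std::path::Path;

// The path computation used by the removed macros: take the directory of the
// source file that invoked the macro and join the relative asset path onto it.
fn internal_asset_path(invoking_file: &str, path_str: &str) -> String {
    Path::new(invoking_file)
        .parent()
        .unwrap()
        .join(path_str)
        .to_string_lossy()
        .into_owned()
}

fn main() {
    assert_eq!(
        internal_asset_path("src/render/mod.rs", "shader.wgsl"),
        "src/render/shader.wgsl"
    );
}
```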
@@ -424,12 +379,11 @@ mod tests {
         io::{
             gated::{GateOpener, GatedReader},
             memory::{Dir, MemoryAssetReader},
-            Reader,
+            AssetSource, AssetSourceId, Reader,
         },
         loader::{AssetLoader, LoadContext},
-        Asset, AssetApp, AssetEvent, AssetId, AssetPath, AssetPlugin, AssetProvider,
-        AssetProviders, AssetServer, Assets, DependencyLoadState, LoadState,
-        RecursiveDependencyLoadState,
+        Asset, AssetApp, AssetEvent, AssetId, AssetPath, AssetPlugin, AssetServer, Assets,
+        DependencyLoadState, LoadState, RecursiveDependencyLoadState,
     };
     use bevy_app::{App, Update};
     use bevy_core::TaskPoolPlugin;

@@ -534,17 +488,14 @@ mod tests {
     fn test_app(dir: Dir) -> (App, GateOpener) {
         let mut app = App::new();
         let (gated_memory_reader, gate_opener) = GatedReader::new(MemoryAssetReader { root: dir });
-        app.insert_resource(
-            AssetProviders::default()
-                .with_reader("Test", move || Box::new(gated_memory_reader.clone())),
+        app.register_asset_source(
+            AssetSourceId::Default,
+            AssetSource::build().with_reader(move || Box::new(gated_memory_reader.clone())),
         )
         .add_plugins((
             TaskPoolPlugin::default(),
             LogPlugin::default(),
-            AssetPlugin::Unprocessed {
-                source: AssetProvider::Custom("Test".to_string()),
-                watch_for_changes: false,
-            },
+            AssetPlugin::default(),
         ));
         (app, gate_opener)
     }
@@ -1,11 +1,12 @@
 use crate::{
-    io::{AssetReaderError, Reader},
+    io::{AssetReaderError, MissingAssetSourceError, MissingProcessedAssetReaderError, Reader},
     meta::{
         loader_settings_meta_transform, AssetHash, AssetMeta, AssetMetaDyn, ProcessedInfoMinimal,
         Settings,
     },
     path::AssetPath,
-    Asset, AssetLoadError, AssetServer, Assets, Handle, UntypedAssetId, UntypedHandle,
+    Asset, AssetLoadError, AssetServer, AssetServerMode, Assets, Handle, UntypedAssetId,
+    UntypedHandle,
 };
 use bevy_ecs::world::World;
 use bevy_utils::{BoxedFuture, CowArc, HashMap, HashSet};

@@ -367,7 +368,7 @@ impl<'a> LoadContext<'a> {
     ) -> Handle<A> {
         let label = label.into();
         let loaded_asset: ErasedLoadedAsset = loaded_asset.into();
-        let labeled_path = self.asset_path.with_label(label.clone());
+        let labeled_path = self.asset_path.clone().with_label(label.clone());
         let handle = self
             .asset_server
             .get_or_create_path_handle(labeled_path, None);

@@ -385,7 +386,7 @@ impl<'a> LoadContext<'a> {
     ///
     /// See [`AssetPath`] for more on labeled assets.
     pub fn has_labeled_asset<'b>(&self, label: impl Into<CowArc<'b, str>>) -> bool {
-        let path = self.asset_path.with_label(label.into());
+        let path = self.asset_path.clone().with_label(label.into());
         self.asset_server.get_handle_untyped(&path).is_some()
     }

@@ -412,15 +413,21 @@ impl<'a> LoadContext<'a> {
     }
 
     /// Gets the source asset path for this load context.
-    pub async fn read_asset_bytes<'b>(
-        &mut self,
-        path: &'b Path,
+    pub async fn read_asset_bytes<'b, 'c>(
+        &'b mut self,
+        path: impl Into<AssetPath<'c>>,
     ) -> Result<Vec<u8>, ReadAssetBytesError> {
-        let mut reader = self.asset_server.reader().read(path).await?;
+        let path = path.into();
+        let source = self.asset_server.get_source(path.source())?;
+        let asset_reader = match self.asset_server.mode() {
+            AssetServerMode::Unprocessed { .. } => source.reader(),
+            AssetServerMode::Processed { .. } => source.processed_reader()?,
+        };
+        let mut reader = asset_reader.read(path.path()).await?;
         let hash = if self.populate_hashes {
             // NOTE: ensure meta is read while the asset bytes reader is still active to ensure transactionality
             // See `ProcessorGatedReader` for more info
-            let meta_bytes = self.asset_server.reader().read_meta_bytes(path).await?;
+            let meta_bytes = asset_reader.read_meta_bytes(path.path()).await?;
             let minimal: ProcessedInfoMinimal = ron::de::from_bytes(&meta_bytes)
                 .map_err(DeserializeMetaError::DeserializeMinimal)?;
             let processed_info = minimal

@@ -432,8 +439,7 @@ impl<'a> LoadContext<'a> {
         };
         let mut bytes = Vec::new();
         reader.read_to_end(&mut bytes).await?;
-        self.loader_dependencies
-            .insert(AssetPath::from_path(path.to_owned()), hash);
+        self.loader_dependencies.insert(path.clone_owned(), hash);
         Ok(bytes)
     }

@@ -480,7 +486,7 @@ impl<'a> LoadContext<'a> {
         &mut self,
         label: impl Into<CowArc<'b, str>>,
     ) -> Handle<A> {
-        let path = self.asset_path.with_label(label);
+        let path = self.asset_path.clone().with_label(label);
         let handle = self.asset_server.get_or_create_path_handle::<A>(path, None);
         self.dependencies.insert(handle.id().untyped());
         handle

@@ -542,6 +548,10 @@ pub enum ReadAssetBytesError {
     DeserializeMetaError(#[from] DeserializeMetaError),
     #[error(transparent)]
     AssetReaderError(#[from] AssetReaderError),
+    #[error(transparent)]
+    MissingAssetSourceError(#[from] MissingAssetSourceError),
+    #[error(transparent)]
+    MissingProcessedAssetReaderError(#[from] MissingProcessedAssetReaderError),
     /// Encountered an I/O error while loading an asset.
     #[error("Encountered an io error while loading asset: {0}")]
     Io(#[from] std::io::Error),
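`read_asset_bytes` above records every path it reads into `loader_dependencies` keyed by a content hash, which later lets the processor decide whether a dependency changed. A stripped-down sketch of that bookkeeping (the struct name and the FNV-style hash below are arbitrary stand-ins for Bevy's real asset hash):

```rust
use std::collections::HashMap;

// Toy stand-in for the load context's dependency map: asset path -> byte hash.
struct DependencyTracker {
    loader_dependencies: HashMap<String, u64>,
}

impl DependencyTracker {
    fn read_asset_bytes(&mut self, path: &str, bytes: &[u8]) -> Vec<u8> {
        // Arbitrary stand-in hash (FNV-1a); Bevy computes a real asset hash here.
        let hash = bytes.iter().fold(0xcbf29ce484222325u64, |h, b| {
            (h ^ *b as u64).wrapping_mul(0x100000001b3)
        });
        self.loader_dependencies.insert(path.to_string(), hash);
        bytes.to_vec()
    }
}

fn main() {
    let mut ctx = DependencyTracker { loader_dependencies: HashMap::new() };
    let bytes = ctx.read_asset_bytes("remote://a/b.test", b"hello");
    assert_eq!(bytes, b"hello");
    assert!(ctx.loader_dependencies.contains_key("remote://a/b.test"));
}
```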
@@ -1,3 +1,4 @@
+use crate::io::AssetSourceId;
 use bevy_reflect::{
     std_traits::ReflectDefault, utility::NonGenericTypeInfoCell, FromReflect, FromType,
     GetTypeRegistration, Reflect, ReflectDeserialize, ReflectFromPtr, ReflectFromReflect,

@@ -12,10 +13,13 @@ use std::{
     ops::Deref,
     path::{Path, PathBuf},
 };
+use thiserror::Error;
 
 /// Represents a path to an asset in a "virtual filesystem".
 ///
-/// Asset paths consist of two main parts:
+/// Asset paths consist of three main parts:
+/// * [`AssetPath::source`]: The name of the [`AssetSource`](crate::io::AssetSource) to load the asset from.
+///     This is optional. If one is not set the default source will be used (which is the `assets` folder by default).
 /// * [`AssetPath::path`]: The "virtual filesystem path" pointing to an asset source file.
 /// * [`AssetPath::label`]: An optional "named sub asset". When assets are loaded, they are
 ///     allowed to load "sub assets" of any type, which are identified by a named "label".

@@ -33,20 +37,24 @@ use std::{
 /// # struct Scene;
 /// #
 /// # let asset_server: AssetServer = panic!();
-/// // This loads the `my_scene.scn` base asset.
+/// // This loads the `my_scene.scn` base asset from the default asset source.
 /// let scene: Handle<Scene> = asset_server.load("my_scene.scn");
 ///
-/// // This loads the `PlayerMesh` labeled asset from the `my_scene.scn` base asset.
+/// // This loads the `PlayerMesh` labeled asset from the `my_scene.scn` base asset in the default asset source.
 /// let mesh: Handle<Mesh> = asset_server.load("my_scene.scn#PlayerMesh");
 ///
+/// // This loads the `my_scene.scn` base asset from a custom 'remote' asset source.
+/// let scene: Handle<Scene> = asset_server.load("remote://my_scene.scn");
 /// ```
 ///
 /// [`AssetPath`] implements [`From`] for `&'static str`, `&'static Path`, and `&'a String`,
 /// which allows us to optimize the static cases.
 /// This means that the common case of `asset_server.load("my_scene.scn")` does not require allocations
 /// when it creates and clones internal owned [`AssetPaths`](AssetPath).
-/// This also means that you should use [`AssetPath::new`] in cases where `&str` is the explicit type.
+/// This also means that you should use [`AssetPath::parse`] in cases where `&str` is the explicit type.
 #[derive(Eq, PartialEq, Hash, Clone, Default)]
 pub struct AssetPath<'a> {
+    source: AssetSourceId<'a>,
     path: CowArc<'a, Path>,
     label: Option<CowArc<'a, str>>,
 }
@@ -67,38 +75,128 @@ impl<'a> Display for AssetPath<'a> {
     }
 }
 
+#[derive(Error, Debug, PartialEq, Eq)]
+pub enum ParseAssetPathError {
+    #[error("Asset source must be followed by '://'")]
+    InvalidSourceSyntax,
+    #[error("Asset source must be at least one character. Either specify the source before the '://' or remove the `://`")]
+    MissingSource,
+    #[error("Asset label must be at least one character. Either specify the label after the '#' or remove the '#'")]
+    MissingLabel,
+}
+
 impl<'a> AssetPath<'a> {
     /// Creates a new [`AssetPath`] from a string in the asset path format:
     /// * An asset at the root: `"scene.gltf"`
     /// * An asset nested in some folders: `"some/path/scene.gltf"`
     /// * An asset with a "label": `"some/path/scene.gltf#Mesh0"`
+    /// * An asset with a custom "source": `"custom://some/path/scene.gltf#Mesh0"`
     ///
     /// Prefer [`From<'static str>`] for static strings, as this will prevent allocations
     /// and reference counting for [`AssetPath::into_owned`].
-    pub fn new(asset_path: &'a str) -> AssetPath<'a> {
-        let (path, label) = Self::get_parts(asset_path);
-        Self {
-            path: CowArc::Borrowed(path),
-            label: label.map(CowArc::Borrowed),
-        }
+    ///
+    /// # Panics
+    /// Panics if the asset path is in an invalid format. Use [`AssetPath::try_parse`] for a fallible variant
+    pub fn parse(asset_path: &'a str) -> AssetPath<'a> {
+        Self::try_parse(asset_path).unwrap()
     }
 
-    fn get_parts(asset_path: &str) -> (&Path, Option<&str>) {
-        let mut parts = asset_path.splitn(2, '#');
-        let path = Path::new(parts.next().expect("Path must be set."));
-        let label = parts.next();
-        (path, label)
+    /// Creates a new [`AssetPath`] from a string in the asset path format:
+    /// * An asset at the root: `"scene.gltf"`
+    /// * An asset nested in some folders: `"some/path/scene.gltf"`
+    /// * An asset with a "label": `"some/path/scene.gltf#Mesh0"`
+    /// * An asset with a custom "source": `"custom://some/path/scene.gltf#Mesh0"`
+    ///
+    /// Prefer [`From<'static str>`] for static strings, as this will prevent allocations
+    /// and reference counting for [`AssetPath::into_owned`].
+    ///
+    /// This will return a [`ParseAssetPathError`] if `asset_path` is in an invalid format.
+    pub fn try_parse(asset_path: &'a str) -> Result<AssetPath<'a>, ParseAssetPathError> {
+        let (source, path, label) = Self::parse_internal(asset_path)?;
+        Ok(Self {
+            source: match source {
+                Some(source) => AssetSourceId::Name(CowArc::Borrowed(source)),
+                None => AssetSourceId::Default,
+            },
+            path: CowArc::Borrowed(path),
+            label: label.map(CowArc::Borrowed),
+        })
     }
 
+    fn parse_internal(
+        asset_path: &str,
+    ) -> Result<(Option<&str>, &Path, Option<&str>), ParseAssetPathError> {
+        let mut chars = asset_path.char_indices();
+        let mut source_range = None;
+        let mut path_range = 0..asset_path.len();
+        let mut label_range = None;
+        while let Some((index, char)) = chars.next() {
+            match char {
+                ':' => {
+                    let (_, char) = chars
+                        .next()
+                        .ok_or(ParseAssetPathError::InvalidSourceSyntax)?;
+                    if char != '/' {
+                        return Err(ParseAssetPathError::InvalidSourceSyntax);
+                    }
+                    let (index, char) = chars
+                        .next()
+                        .ok_or(ParseAssetPathError::InvalidSourceSyntax)?;
+                    if char != '/' {
+                        return Err(ParseAssetPathError::InvalidSourceSyntax);
+                    }
+                    source_range = Some(0..index - 2);
+                    path_range.start = index + 1;
+                }
+                '#' => {
+                    path_range.end = index;
+                    label_range = Some(index + 1..asset_path.len());
+                    break;
+                }
+                _ => {}
+            }
+        }
+
+        let source = match source_range {
+            Some(source_range) => {
+                if source_range.is_empty() {
+                    return Err(ParseAssetPathError::MissingSource);
+                }
+                Some(&asset_path[source_range])
+            }
+            None => None,
+        };
+        let label = match label_range {
+            Some(label_range) => {
+                if label_range.is_empty() {
+                    return Err(ParseAssetPathError::MissingLabel);
+                }
+                Some(&asset_path[label_range])
+            }
+            None => None,
+        };
+
+        let path = Path::new(&asset_path[path_range]);
+        Ok((source, path, label))
+    }
+
     /// Creates a new [`AssetPath`] from a [`Path`].
     #[inline]
-    pub fn from_path(path: impl Into<CowArc<'a, Path>>) -> AssetPath<'a> {
+    pub fn from_path(path: &'a Path) -> AssetPath<'a> {
         AssetPath {
-            path: path.into(),
+            path: CowArc::Borrowed(path),
+            source: AssetSourceId::Default,
             label: None,
         }
     }
 
+    /// Gets the "asset source", if one was defined. If none was defined, the default source
+    /// will be used.
+    #[inline]
+    pub fn source(&self) -> &AssetSourceId {
+        &self.source
+    }
+
     /// Gets the "sub-asset label".
     #[inline]
     pub fn label(&self) -> Option<&str> {
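The scan in `parse_internal` above can be reproduced standalone: `://` (if present) terminates the source name and `#` starts the label, and both parts are optional. A sketch under those rules, minus the `CowArc`/`Path` plumbing (and slightly stricter than the original about stray `:` characters):

```rust
#[derive(Debug, PartialEq)]
enum ParseError {
    InvalidSourceSyntax,
    MissingSource,
    MissingLabel,
}

// Standalone version of the scan in `parse_internal`: split
// `source://path#label` into its three pieces, two of them optional.
fn split_asset_path(s: &str) -> Result<(Option<&str>, &str, Option<&str>), ParseError> {
    let (source, rest) = match s.find("://") {
        Some(0) => return Err(ParseError::MissingSource),
        Some(i) => (Some(&s[..i]), &s[i + 3..]),
        None => {
            // A lone ':' that never introduces "://" is malformed.
            if s.contains(':') {
                return Err(ParseError::InvalidSourceSyntax);
            }
            (None, s)
        }
    };
    let (path, label) = match rest.find('#') {
        Some(i) => {
            let label = &rest[i + 1..];
            if label.is_empty() {
                return Err(ParseError::MissingLabel);
            }
            (&rest[..i], Some(label))
        }
        None => (rest, None),
    };
    Ok((source, path, label))
}

fn main() {
    assert_eq!(split_asset_path("a/b.test"), Ok((None, "a/b.test", None)));
    assert_eq!(
        split_asset_path("http://a/b.test#Foo"),
        Ok((Some("http"), "a/b.test", Some("Foo")))
    );
    assert_eq!(split_asset_path("://x"), Err(ParseError::MissingSource));
    assert_eq!(split_asset_path("a/b.test#"), Err(ParseError::MissingLabel));
    assert_eq!(split_asset_path("http:/"), Err(ParseError::InvalidSourceSyntax));
}
```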
@@ -115,6 +213,7 @@ impl<'a> AssetPath<'a> {
     #[inline]
     pub fn without_label(&self) -> AssetPath<'_> {
         Self {
+            source: self.source.clone(),
             path: self.path.clone(),
             label: None,
         }

@@ -135,24 +234,62 @@ impl<'a> AssetPath<'a> {
     /// Returns this asset path with the given label. This will replace the previous
     /// label if it exists.
     #[inline]
-    pub fn with_label(&self, label: impl Into<CowArc<'a, str>>) -> AssetPath<'a> {
+    pub fn with_label(self, label: impl Into<CowArc<'a, str>>) -> AssetPath<'a> {
         AssetPath {
-            path: self.path.clone(),
+            source: self.source,
+            path: self.path,
             label: Some(label.into()),
         }
     }
 
+    /// Returns this asset path with the given asset source. This will replace the previous asset
+    /// source if it exists.
+    #[inline]
+    pub fn with_source(self, source: impl Into<AssetSourceId<'a>>) -> AssetPath<'a> {
+        AssetPath {
+            source: source.into(),
+            path: self.path,
+            label: self.label,
+        }
+    }
+
+    /// Returns an [`AssetPath`] for the parent folder of this path, if there is a parent folder in the path.
+    pub fn parent(&self) -> Option<AssetPath<'a>> {
+        let path = match &self.path {
+            CowArc::Borrowed(path) => CowArc::Borrowed(path.parent()?),
+            CowArc::Static(path) => CowArc::Static(path.parent()?),
+            CowArc::Owned(path) => path.parent()?.to_path_buf().into(),
+        };
+        Some(AssetPath {
+            source: self.source.clone(),
+            label: None,
+            path,
+        })
+    }
+
     /// Converts this into an "owned" value. If internally a value is borrowed, it will be cloned into an "owned [`Arc`]".
-    /// If it is already an "owned [`Arc`]", it will remain unchanged.
+    /// If internally a value is a static reference, the static reference will be used unchanged.
+    /// If internally a value is an "owned [`Arc`]", it will remain unchanged.
     ///
     /// [`Arc`]: std::sync::Arc
     pub fn into_owned(self) -> AssetPath<'static> {
         AssetPath {
+            source: self.source.into_owned(),
             path: self.path.into_owned(),
             label: self.label.map(|l| l.into_owned()),
         }
     }
 
+    /// Clones this into an "owned" value. If internally a value is borrowed, it will be cloned into an "owned [`Arc`]".
+    /// If internally a value is a static reference, the static reference will be used unchanged.
+    /// If internally a value is an "owned [`Arc`]", the [`Arc`] will be cloned.
+    ///
+    /// [`Arc`]: std::sync::Arc
+    #[inline]
+    pub fn clone_owned(&self) -> AssetPath<'static> {
+        self.clone().into_owned()
+    }
+
     /// Returns the full extension (including multiple '.' values).
     /// Ex: Returns `"config.ron"` for `"my_asset.config.ron"`
     pub fn get_full_extension(&self) -> Option<String> {
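`with_label` and `with_source` now take `self` by value, so unchanged fields are moved rather than cloned; callers that still need the original clone first (as the `self.asset_path.clone().with_label(...)` call sites in the loader hunks show). A toy version of that by-value builder style (the struct here is a hypothetical stand-in):

```rust
#[derive(Clone, Debug, PartialEq)]
struct PathSketch {
    source: Option<String>,
    path: String,
    label: Option<String>,
}

impl PathSketch {
    // By-value builders: consuming `self` moves the untouched fields into the
    // new value instead of cloning them.
    fn with_label(self, label: impl Into<String>) -> Self {
        Self { label: Some(label.into()), ..self }
    }

    fn with_source(self, source: impl Into<String>) -> Self {
        Self { source: Some(source.into()), ..self }
    }
}

fn main() {
    let p = PathSketch { source: None, path: "scene.gltf".into(), label: None }
        .with_source("remote")
        .with_label("Mesh0");
    assert_eq!(p.source.as_deref(), Some("remote"));
    assert_eq!(p.label.as_deref(), Some("Mesh0"));
    assert_eq!(p.path, "scene.gltf");
}
```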
@@ -176,8 +313,9 @@ impl<'a> AssetPath<'a> {
 impl From<&'static str> for AssetPath<'static> {
     #[inline]
     fn from(asset_path: &'static str) -> Self {
-        let (path, label) = Self::get_parts(asset_path);
+        let (source, path, label) = Self::parse_internal(asset_path).unwrap();
         AssetPath {
+            source: source.into(),
             path: CowArc::Static(path),
             label: label.map(CowArc::Static),
         }

@@ -187,14 +325,14 @@ impl From<&'static str> for AssetPath<'static> {
 impl<'a> From<&'a String> for AssetPath<'a> {
     #[inline]
     fn from(asset_path: &'a String) -> Self {
-        AssetPath::new(asset_path.as_str())
+        AssetPath::parse(asset_path.as_str())
     }
 }
 
 impl From<String> for AssetPath<'static> {
     #[inline]
     fn from(asset_path: String) -> Self {
-        AssetPath::new(asset_path.as_str()).into_owned()
+        AssetPath::parse(asset_path.as_str()).into_owned()
     }
 }

@@ -202,6 +340,7 @@ impl From<&'static Path> for AssetPath<'static> {
     #[inline]
     fn from(path: &'static Path) -> Self {
         Self {
+            source: AssetSourceId::Default,
             path: CowArc::Static(path),
             label: None,
         }

@@ -212,6 +351,7 @@ impl From<PathBuf> for AssetPath<'static> {
     #[inline]
     fn from(path: PathBuf) -> Self {
         Self {
+            source: AssetSourceId::Default,
             path: path.into(),
             label: None,
         }

@@ -261,7 +401,7 @@ impl<'de> Visitor<'de> for AssetPathVisitor {
     where
         E: serde::de::Error,
     {
-        Ok(AssetPath::new(v).into_owned())
+        Ok(AssetPath::parse(v).into_owned())
     }
 
     fn visit_string<E>(self, v: String) -> Result<Self::Value, E>

@@ -402,3 +542,36 @@ impl FromReflect for AssetPath<'static> {
         >(<dyn Reflect>::as_any(reflect))?))
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use crate::AssetPath;
+    use std::path::Path;
+
+    #[test]
+    fn parse_asset_path() {
+        let result = AssetPath::parse_internal("a/b.test");
+        assert_eq!(result, Ok((None, Path::new("a/b.test"), None)));
+
+        let result = AssetPath::parse_internal("http://a/b.test");
+        assert_eq!(result, Ok((Some("http"), Path::new("a/b.test"), None)));
+
+        let result = AssetPath::parse_internal("http://a/b.test#Foo");
+        assert_eq!(
+            result,
+            Ok((Some("http"), Path::new("a/b.test"), Some("Foo")))
+        );
+
+        let result = AssetPath::parse_internal("http://");
+        assert_eq!(result, Ok((Some("http"), Path::new(""), None)));
+
+        let result = AssetPath::parse_internal("://x");
+        assert_eq!(result, Err(crate::ParseAssetPathError::MissingSource));
+
+        let result = AssetPath::parse_internal("a/b.test#");
+        assert_eq!(result, Err(crate::ParseAssetPathError::MissingLabel));
+
+        let result = AssetPath::parse_internal("http:/");
+        assert_eq!(result, Err(crate::ParseAssetPathError::InvalidSourceSyntax));
+    }
+}
@@ -1,15 +1,16 @@
+use crate::AssetPath;
 use async_fs::File;
 use bevy_log::error;
 use bevy_utils::HashSet;
 use futures_lite::{AsyncReadExt, AsyncWriteExt};
-use std::path::{Path, PathBuf};
+use std::path::PathBuf;
 use thiserror::Error;
 
 /// An in-memory representation of a single [`ProcessorTransactionLog`] entry.
 #[derive(Debug)]
 pub(crate) enum LogEntry {
-    BeginProcessing(PathBuf),
-    EndProcessing(PathBuf),
+    BeginProcessing(AssetPath<'static>),
+    EndProcessing(AssetPath<'static>),
     UnrecoverableError,
 }

@@ -55,12 +56,12 @@ pub enum ValidateLogError {
 /// An error that occurs when validating individual [`ProcessorTransactionLog`] entries.
 #[derive(Error, Debug)]
 pub enum LogEntryError {
-    #[error("Encountered a duplicate process asset transaction: {0:?}")]
-    DuplicateTransaction(PathBuf),
-    #[error("A transaction was ended that never started {0:?}")]
-    EndedMissingTransaction(PathBuf),
-    #[error("An asset started processing but never finished: {0:?}")]
-    UnfinishedTransaction(PathBuf),
+    #[error("Encountered a duplicate process asset transaction: {0}")]
+    DuplicateTransaction(AssetPath<'static>),
+    #[error("A transaction was ended that never started {0}")]
+    EndedMissingTransaction(AssetPath<'static>),
+    #[error("An asset started processing but never finished: {0}")]
+    UnfinishedTransaction(AssetPath<'static>),
 }
 
 const LOG_PATH: &str = "imported_assets/log";

@@ -114,9 +115,13 @@ impl ProcessorTransactionLog {
         file.read_to_string(&mut string).await?;
         for line in string.lines() {
             if let Some(path_str) = line.strip_prefix(ENTRY_BEGIN) {
-                log_lines.push(LogEntry::BeginProcessing(PathBuf::from(path_str)));
+                log_lines.push(LogEntry::BeginProcessing(
+                    AssetPath::parse(path_str).into_owned(),
+                ));
             } else if let Some(path_str) = line.strip_prefix(ENTRY_END) {
-                log_lines.push(LogEntry::EndProcessing(PathBuf::from(path_str)));
+                log_lines.push(LogEntry::EndProcessing(
+                    AssetPath::parse(path_str).into_owned(),
+                ));
             } else if line.is_empty() {
                 continue;
             } else {

@@ -127,7 +132,7 @@ impl ProcessorTransactionLog {
     }
 
     pub(crate) async fn validate() -> Result<(), ValidateLogError> {
-        let mut transactions: HashSet<PathBuf> = Default::default();
+        let mut transactions: HashSet<AssetPath<'static>> = Default::default();
         let mut errors: Vec<LogEntryError> = Vec::new();
         let entries = Self::read().await?;
         for entry in entries {

@@ -160,21 +165,27 @@ impl ProcessorTransactionLog {
 
     /// Logs the start of an asset being processed. If this is not followed at some point in the log by a closing [`ProcessorTransactionLog::end_processing`],
     /// in the next run of the processor the asset processing will be considered "incomplete" and it will be reprocessed.
-    pub(crate) async fn begin_processing(&mut self, path: &Path) -> Result<(), WriteLogError> {
-        self.write(&format!("{ENTRY_BEGIN}{}\n", path.to_string_lossy()))
+    pub(crate) async fn begin_processing(
+        &mut self,
+        path: &AssetPath<'_>,
+    ) -> Result<(), WriteLogError> {
+        self.write(&format!("{ENTRY_BEGIN}{path}\n"))
             .await
             .map_err(|e| WriteLogError {
-                log_entry: LogEntry::BeginProcessing(path.to_owned()),
+                log_entry: LogEntry::BeginProcessing(path.clone_owned()),
                 error: e,
             })
     }
 
     /// Logs the end of an asset being successfully processed. See [`ProcessorTransactionLog::begin_processing`].
-    pub(crate) async fn end_processing(&mut self, path: &Path) -> Result<(), WriteLogError> {
-        self.write(&format!("{ENTRY_END}{}\n", path.to_string_lossy()))
+    pub(crate) async fn end_processing(
+        &mut self,
+        path: &AssetPath<'_>,
+    ) -> Result<(), WriteLogError> {
+        self.write(&format!("{ENTRY_END}{path}\n"))
            .await
            .map_err(|e| WriteLogError {
-                log_entry: LogEntry::EndProcessing(path.to_owned()),
+                log_entry: LogEntry::EndProcessing(path.clone_owned()),
                 error: e,
             })
     }
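The transaction log above is a line-oriented format: a begin-prefixed line opens a transaction for an asset path and an end-prefixed line closes it; on the next processor run, any still-open transaction marks its asset as "incomplete" and forces reprocessing. A minimal replay sketch (the literal prefix values here are assumptions for the example; the real `ENTRY_BEGIN`/`ENTRY_END` constants are defined elsewhere in this file):

```rust
use std::collections::HashSet;

// Assumed prefix values for this sketch only.
const ENTRY_BEGIN: &str = "Begin ";
const ENTRY_END: &str = "End ";

// Replay a transaction log and report assets whose processing began but
// never ended, mirroring the `validate` pass above.
fn unfinished_transactions(log: &str) -> Vec<String> {
    let mut open: HashSet<String> = HashSet::new();
    for line in log.lines() {
        if let Some(path) = line.strip_prefix(ENTRY_BEGIN) {
            open.insert(path.to_string());
        } else if let Some(path) = line.strip_prefix(ENTRY_END) {
            open.remove(path);
        }
    }
    let mut unfinished: Vec<String> = open.into_iter().collect();
    unfinished.sort();
    unfinished
}

fn main() {
    let log = "Begin a.png\nEnd a.png\nBegin b.png\n";
    // `b.png` started but never finished, so it must be reprocessed.
    assert_eq!(unfinished_transactions(log), vec!["b.png".to_string()]);
}
```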
@ -6,15 +6,15 @@ pub use process::*;
|
|||
|
||||
use crate::{
|
||||
io::{
|
||||
processor_gated::ProcessorGatedReader, AssetProvider, AssetProviders, AssetReader,
|
||||
AssetReaderError, AssetSourceEvent, AssetWatcher, AssetWriter, AssetWriterError,
|
||||
AssetReader, AssetReaderError, AssetSource, AssetSourceBuilders, AssetSourceEvent,
|
||||
AssetSourceId, AssetSources, AssetWriter, AssetWriterError, MissingAssetSourceError,
|
||||
},
|
||||
meta::{
|
||||
get_asset_hash, get_full_asset_hash, AssetAction, AssetActionMinimal, AssetHash, AssetMeta,
|
||||
AssetMetaDyn, AssetMetaMinimal, ProcessedInfo, ProcessedInfoMinimal,
|
||||
},
|
||||
AssetLoadError, AssetPath, AssetServer, DeserializeMetaError,
|
||||
MissingAssetLoaderForExtensionError, CANNOT_WATCH_ERROR_MESSAGE,
|
||||
AssetLoadError, AssetPath, AssetServer, AssetServerMode, DeserializeMetaError,
|
||||
MissingAssetLoaderForExtensionError,
|
||||
};
|
||||
use bevy_ecs::prelude::*;
|
||||
use bevy_log::{debug, error, trace, warn};
|
||||
|
@ -30,10 +30,10 @@ use std::{
|
|||
};
|
||||
use thiserror::Error;
|
||||
|
||||
/// A "background" asset processor that reads asset values from a source [`AssetProvider`] (which corresponds to an [`AssetReader`] / [`AssetWriter`] pair),
|
||||
/// processes them in some way, and writes them to a destination [`AssetProvider`].
|
||||
/// A "background" asset processor that reads asset values from a source [`AssetSource`] (which corresponds to an [`AssetReader`] / [`AssetWriter`] pair),
|
||||
/// processes them in some way, and writes them to a destination [`AssetSource`].
|
||||
///
|
||||
/// This will create .meta files (a human-editable serialized form of [`AssetMeta`]) in the source [`AssetProvider`] for assets that
|
||||
/// This will create .meta files (a human-editable serialized form of [`AssetMeta`]) in the source [`AssetSource`] for assets that
|
||||
/// that can be loaded and/or processed. This enables developers to configure how each asset should be loaded and/or processed.
|
||||
///
|
||||
/// [`AssetProcessor`] can be run in the background while a Bevy App is running. Changes to assets will be automatically detected and hot-reloaded.
|
||||
|
@@ -58,37 +58,21 @@ pub struct AssetProcessorData {
     /// Default processors for file extensions
     default_processors: RwLock<HashMap<String, &'static str>>,
     state: async_lock::RwLock<ProcessorState>,
-    source_reader: Box<dyn AssetReader>,
-    source_writer: Box<dyn AssetWriter>,
-    destination_reader: Box<dyn AssetReader>,
-    destination_writer: Box<dyn AssetWriter>,
+    sources: AssetSources,
     initialized_sender: async_broadcast::Sender<()>,
     initialized_receiver: async_broadcast::Receiver<()>,
     finished_sender: async_broadcast::Sender<()>,
     finished_receiver: async_broadcast::Receiver<()>,
-    source_event_receiver: crossbeam_channel::Receiver<AssetSourceEvent>,
-    _source_watcher: Option<Box<dyn AssetWatcher>>,
 }

 impl AssetProcessor {
     /// Creates a new [`AssetProcessor`] instance.
-    pub fn new(
-        providers: &mut AssetProviders,
-        source: &AssetProvider,
-        destination: &AssetProvider,
-    ) -> Self {
-        let data = Arc::new(AssetProcessorData::new(
-            providers.get_source_reader(source),
-            providers.get_source_writer(source),
-            providers.get_destination_reader(destination),
-            providers.get_destination_writer(destination),
-        ));
-        let destination_reader = providers.get_destination_reader(destination);
+    pub fn new(source: &mut AssetSourceBuilders) -> Self {
+        let data = Arc::new(AssetProcessorData::new(source.build_sources(true, false)));
         // The asset processor uses its own asset server with its own id space
-        let server = AssetServer::new(
-            Box::new(ProcessorGatedReader::new(destination_reader, data.clone())),
-            true,
-        );
+        let mut sources = source.build_sources(false, false);
+        sources.gate_on_processor(data.clone());
+        let server = AssetServer::new(sources, AssetServerMode::Processed, false);
         Self { server, data }
     }

@@ -114,24 +98,18 @@ impl AssetProcessor {
         *self.data.state.read().await
     }

-    /// Retrieves the "source" [`AssetReader`] (the place where user-provided unprocessed "asset sources" are stored)
-    pub fn source_reader(&self) -> &dyn AssetReader {
-        &*self.data.source_reader
+    /// Retrieves the [`AssetSource`] for this processor
+    #[inline]
+    pub fn get_source<'a, 'b>(
+        &'a self,
+        id: impl Into<AssetSourceId<'b>>,
+    ) -> Result<&'a AssetSource, MissingAssetSourceError> {
+        self.data.sources.get(id.into())
     }

-    /// Retrieves the "source" [`AssetWriter`] (the place where user-provided unprocessed "asset sources" are stored)
-    pub fn source_writer(&self) -> &dyn AssetWriter {
-        &*self.data.source_writer
-    }
-
-    /// Retrieves the "destination" [`AssetReader`] (the place where processed / [`AssetProcessor`]-managed assets are stored)
-    pub fn destination_reader(&self) -> &dyn AssetReader {
-        &*self.data.destination_reader
-    }
-
-    /// Retrieves the "destination" [`AssetWriter`] (the place where processed / [`AssetProcessor`]-managed assets are stored)
-    pub fn destination_writer(&self) -> &dyn AssetWriter {
-        &*self.data.destination_writer
+    #[inline]
+    pub fn sources(&self) -> &AssetSources {
+        &self.data.sources
     }

     /// Logs an unrecoverable error. On the next run of the processor, all assets will be regenerated. This should only be used as a last resort.
@@ -144,14 +122,14 @@ impl AssetProcessor {

     /// Logs the start of an asset being processed. If this is not followed at some point in the log by a closing [`AssetProcessor::log_end_processing`],
     /// in the next run of the processor the asset processing will be considered "incomplete" and it will be reprocessed.
-    async fn log_begin_processing(&self, path: &Path) {
+    async fn log_begin_processing(&self, path: &AssetPath<'_>) {
         let mut log = self.data.log.write().await;
         let log = log.as_mut().unwrap();
         log.begin_processing(path).await.unwrap();
     }

     /// Logs the end of an asset being successfully processed. See [`AssetProcessor::log_begin_processing`].
-    async fn log_end_processing(&self, path: &Path) {
+    async fn log_end_processing(&self, path: &AssetPath<'_>) {
         let mut log = self.data.log.write().await;
         let log = log.as_mut().unwrap();
         log.end_processing(path).await.unwrap();
@@ -172,10 +150,11 @@ impl AssetProcessor {
     }

     /// Processes all assets. This will:
+    /// * For each "processed" [`AssetSource`]:
     /// * Scan the [`ProcessorTransactionLog`] and recover from any failures detected
-    /// * Scan the destination [`AssetProvider`] to build the current view of already processed assets.
-    /// * Scan the source [`AssetProvider`] and remove any processed "destination" assets that are invalid or no longer exist.
-    /// * For each asset in the `source` [`AssetProvider`], kick off a new "process job", which will process the asset
+    /// * Scan the processed [`AssetReader`] to build the current view of already processed assets.
+    /// * Scan the unprocessed [`AssetReader`] and remove any final processed assets that are invalid or no longer exist.
+    /// * For each asset in the unprocessed [`AssetReader`], kick off a new "process job", which will process the asset
     /// (if the latest version of the asset has not been processed).
     #[cfg(all(not(target_arch = "wasm32"), feature = "multi-threaded"))]
     pub fn process_assets(&self) {
@@ -184,8 +163,11 @@ impl AssetProcessor {
         IoTaskPool::get().scope(|scope| {
             scope.spawn(async move {
                 self.initialize().await.unwrap();
-                let path = PathBuf::from("");
-                self.process_assets_internal(scope, path).await.unwrap();
+                for source in self.sources().iter_processed() {
+                    self.process_assets_internal(scope, source, PathBuf::from(""))
+                        .await
+                        .unwrap();
+                }
             });
         });
         // This must happen _after_ the scope resolves or it will happen "too early"
@@ -195,20 +177,24 @@ impl AssetProcessor {
         debug!("Processing finished in {:?}", end_time - start_time);
     }

-    /// Listens for changes to assets in the source [`AssetProvider`] and update state accordingly.
+    /// Listens for changes to assets in the source [`AssetSource`] and update state accordingly.
     // PERF: parallelize change event processing
     pub async fn listen_for_source_change_events(&self) {
         debug!("Listening for changes to source assets");
         loop {
             let mut started_processing = false;

-            for event in self.data.source_event_receiver.try_iter() {
-                if !started_processing {
-                    self.set_state(ProcessorState::Processing).await;
-                    started_processing = true;
-                }
+            for source in self.data.sources.iter_processed() {
+                if let Some(receiver) = source.event_receiver() {
+                    for event in receiver.try_iter() {
+                        if !started_processing {
+                            self.set_state(ProcessorState::Processing).await;
+                            started_processing = true;
+                        }

-                self.handle_asset_source_event(event).await;
+                        self.handle_asset_source_event(source, event).await;
+                    }
+                }
             }

             if started_processing {
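The event loop in the hunk above drains each processed source's channel without blocking and flips a flag on the first event. A minimal, self-contained sketch of that drain pattern, using `std::sync::mpsc` and stand-in types instead of Bevy's `AssetSourceEvent` / crossbeam channels (names here are illustrative assumptions):

```rust
use std::sync::mpsc;

// Stand-in for AssetSourceEvent; only the shape matters for this sketch.
#[derive(Debug)]
enum SourceEvent {
    Added(String),
    Removed(String),
}

// Drain every source's receiver with non-blocking `try_iter`, flipping
// `started_processing` on the first event seen, as the processor does
// before entering ProcessorState::Processing.
fn drain_events(receivers: &[mpsc::Receiver<SourceEvent>]) -> (bool, usize) {
    let mut started_processing = false;
    let mut handled = 0;
    for receiver in receivers {
        for _event in receiver.try_iter() {
            if !started_processing {
                // The real processor transitions its state here.
                started_processing = true;
            }
            handled += 1;
        }
    }
    (started_processing, handled)
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(SourceEvent::Added("a.png".into())).unwrap();
    tx.send(SourceEvent::Removed("b.png".into())).unwrap();
    let (started, handled) = drain_events(&[rx]);
    assert!(started);
    assert_eq!(handled, 2);
}
```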
@@ -217,84 +203,91 @@ impl AssetProcessor {
         }
     }

-    async fn handle_asset_source_event(&self, event: AssetSourceEvent) {
+    async fn handle_asset_source_event(&self, source: &AssetSource, event: AssetSourceEvent) {
         trace!("{event:?}");
         match event {
             AssetSourceEvent::AddedAsset(path)
             | AssetSourceEvent::AddedMeta(path)
             | AssetSourceEvent::ModifiedAsset(path)
             | AssetSourceEvent::ModifiedMeta(path) => {
-                self.process_asset(&path).await;
+                self.process_asset(source, path).await;
             }
             AssetSourceEvent::RemovedAsset(path) => {
-                self.handle_removed_asset(path).await;
+                self.handle_removed_asset(source, path).await;
             }
             AssetSourceEvent::RemovedMeta(path) => {
-                self.handle_removed_meta(&path).await;
+                self.handle_removed_meta(source, path).await;
             }
             AssetSourceEvent::AddedFolder(path) => {
-                self.handle_added_folder(path).await;
+                self.handle_added_folder(source, path).await;
             }
             // NOTE: As a heads up for future devs: this event shouldn't be run in parallel with other events that might
             // touch this folder (ex: the folder might be re-created with new assets). Clean up the old state first.
             // Currently this event handler is not parallel, but it could be (and likely should be) in the future.
             AssetSourceEvent::RemovedFolder(path) => {
-                self.handle_removed_folder(&path).await;
+                self.handle_removed_folder(source, &path).await;
             }
             AssetSourceEvent::RenamedAsset { old, new } => {
                 // If there was a rename event, but the path hasn't changed, this asset might need reprocessing.
                 // Sometimes this event is returned when an asset is moved "back" into the asset folder
                 if old == new {
-                    self.process_asset(&new).await;
+                    self.process_asset(source, new).await;
                 } else {
-                    self.handle_renamed_asset(old, new).await;
+                    self.handle_renamed_asset(source, old, new).await;
                 }
             }
             AssetSourceEvent::RenamedMeta { old, new } => {
                 // If there was a rename event, but the path hasn't changed, this asset meta might need reprocessing.
                 // Sometimes this event is returned when an asset meta is moved "back" into the asset folder
                 if old == new {
-                    self.process_asset(&new).await;
+                    self.process_asset(source, new).await;
                 } else {
                     debug!("Meta renamed from {old:?} to {new:?}");
                     let mut infos = self.data.asset_infos.write().await;
                     // Renaming meta should not assume that an asset has also been renamed. Check both old and new assets to see
                     // if they should be re-imported (and/or have new meta generated)
-                    infos.check_reprocess_queue.push_back(old);
-                    infos.check_reprocess_queue.push_back(new);
+                    let new_asset_path = AssetPath::from(new).with_source(source.id());
+                    let old_asset_path = AssetPath::from(old).with_source(source.id());
+                    infos.check_reprocess_queue.push_back(old_asset_path);
+                    infos.check_reprocess_queue.push_back(new_asset_path);
                 }
             }
             AssetSourceEvent::RenamedFolder { old, new } => {
                 // If there was a rename event, but the path hasn't changed, this asset folder might need reprocessing.
                 // Sometimes this event is returned when an asset meta is moved "back" into the asset folder
                 if old == new {
-                    self.handle_added_folder(new).await;
+                    self.handle_added_folder(source, new).await;
                 } else {
                     // PERF: this reprocesses everything in the moved folder. this is not necessary in most cases, but
                     // requires some nuance when it comes to path handling.
-                    self.handle_removed_folder(&old).await;
-                    self.handle_added_folder(new).await;
+                    self.handle_removed_folder(source, &old).await;
+                    self.handle_added_folder(source, new).await;
                 }
             }
             AssetSourceEvent::RemovedUnknown { path, is_meta } => {
-                match self.destination_reader().is_directory(&path).await {
+                let processed_reader = source.processed_reader().unwrap();
+                match processed_reader.is_directory(&path).await {
                     Ok(is_directory) => {
                         if is_directory {
-                            self.handle_removed_folder(&path).await;
+                            self.handle_removed_folder(source, &path).await;
                         } else if is_meta {
-                            self.handle_removed_meta(&path).await;
+                            self.handle_removed_meta(source, path).await;
                         } else {
-                            self.handle_removed_asset(path).await;
+                            self.handle_removed_asset(source, path).await;
                         }
                     }
                     Err(err) => {
-                        if let AssetReaderError::NotFound(_) = err {
-                            // if the path is not found, a processed version does not exist
-                        } else {
-                            error!(
-                                "Path '{path:?}' as removed, but the destination reader could not determine if it \
-                                was a folder or a file due to the following error: {err}"
-                            );
+                        match err {
+                            AssetReaderError::NotFound(_) => {
+                                // if the path is not found, a processed version does not exist
+                            }
+                            AssetReaderError::Io(err) => {
+                                error!(
+                                    "Path '{}' was removed, but the destination reader could not determine if it \
+                                    was a folder or a file due to the following error: {err}",
+                                    AssetPath::from_path(&path).with_source(source.id())
+                                );
+                            }
+                            }
                         }
                     }
                 }
@@ -302,38 +295,44 @@ impl AssetProcessor {
         }
     }

-    async fn handle_added_folder(&self, path: PathBuf) {
-        debug!("Folder {:?} was added. Attempting to re-process", path);
+    async fn handle_added_folder(&self, source: &AssetSource, path: PathBuf) {
+        debug!(
+            "Folder {} was added. Attempting to re-process",
+            AssetPath::from_path(&path).with_source(source.id())
+        );
         #[cfg(any(target_arch = "wasm32", not(feature = "multi-threaded")))]
         error!("AddFolder event cannot be handled in single threaded mode (or WASM) yet.");
         #[cfg(all(not(target_arch = "wasm32"), feature = "multi-threaded"))]
         IoTaskPool::get().scope(|scope| {
             scope.spawn(async move {
-                self.process_assets_internal(scope, path).await.unwrap();
+                self.process_assets_internal(scope, source, path)
+                    .await
+                    .unwrap();
             });
         });
     }

     /// Responds to a removed meta event by reprocessing the asset at the given path.
-    async fn handle_removed_meta(&self, path: &Path) {
+    async fn handle_removed_meta(&self, source: &AssetSource, path: PathBuf) {
         // If meta was removed, we might need to regenerate it.
         // Likewise, the user might be manually re-adding the asset.
         // Therefore, we shouldn't automatically delete the asset ... that is a
         // user-initiated action.
         debug!(
             "Meta for asset {:?} was removed. Attempting to re-process",
-            path
+            AssetPath::from_path(&path).with_source(source.id())
         );
-        self.process_asset(path).await;
+        self.process_asset(source, path).await;
     }

     /// Removes all processed assets stored at the given path (respecting transactionality), then removes the folder itself.
-    async fn handle_removed_folder(&self, path: &Path) {
+    async fn handle_removed_folder(&self, source: &AssetSource, path: &Path) {
         debug!("Removing folder {:?} because source was removed", path);
-        match self.destination_reader().read_directory(path).await {
+        let processed_reader = source.processed_reader().unwrap();
+        match processed_reader.read_directory(path).await {
             Ok(mut path_stream) => {
                 while let Some(child_path) = path_stream.next().await {
-                    self.handle_removed_asset(child_path).await;
+                    self.handle_removed_asset(source, child_path).await;
                 }
             }
             Err(err) => match err {
@@ -349,28 +348,32 @@ impl AssetProcessor {
                 }
             },
         }
-        if let Err(AssetWriterError::Io(err)) =
-            self.destination_writer().remove_directory(path).await
-        {
-            // we can ignore NotFound because if the "final" file in a folder was removed
-            // then we automatically clean up this folder
-            if err.kind() != ErrorKind::NotFound {
-                error!("Failed to remove destination folder that no longer exists in asset source {path:?}: {err}");
+        let processed_writer = source.processed_writer().unwrap();
+        if let Err(err) = processed_writer.remove_directory(path).await {
+            match err {
+                AssetWriterError::Io(err) => {
+                    // we can ignore NotFound because if the "final" file in a folder was removed
+                    // then we automatically clean up this folder
+                    if err.kind() != ErrorKind::NotFound {
+                        let asset_path = AssetPath::from_path(path).with_source(source.id());
+                        error!("Failed to remove destination folder that no longer exists in {asset_path}: {err}");
+                    }
+                }
             }
         }
     }

     /// Removes the processed version of an asset and associated in-memory metadata. This will block until all existing reads/writes to the
     /// asset have finished, thanks to the `file_transaction_lock`.
-    async fn handle_removed_asset(&self, path: PathBuf) {
-        debug!("Removing processed {:?} because source was removed", path);
-        let asset_path = AssetPath::from_path(path);
+    async fn handle_removed_asset(&self, source: &AssetSource, path: PathBuf) {
+        let asset_path = AssetPath::from(path).with_source(source.id());
+        debug!("Removing processed {asset_path} because source was removed");
         let mut infos = self.data.asset_infos.write().await;
         if let Some(info) = infos.get(&asset_path) {
             // we must wait for uncontested write access to the asset source to ensure existing readers / writers
             // can finish their operations
             let _write_lock = info.file_transaction_lock.write();
-            self.remove_processed_asset_and_meta(asset_path.path())
+            self.remove_processed_asset_and_meta(source, asset_path.path())
                 .await;
         }
         infos.remove(&asset_path).await;
@@ -378,22 +381,25 @@ impl AssetProcessor {

     /// Handles a renamed source asset by moving its processed results to the new location and updating in-memory paths + metadata.
     /// This will cause direct path dependencies to break.
-    async fn handle_renamed_asset(&self, old: PathBuf, new: PathBuf) {
+    async fn handle_renamed_asset(&self, source: &AssetSource, old: PathBuf, new: PathBuf) {
         let mut infos = self.data.asset_infos.write().await;
-        let old_asset_path = AssetPath::from_path(old);
-        if let Some(info) = infos.get(&old_asset_path) {
+        let old = AssetPath::from(old).with_source(source.id());
+        let new = AssetPath::from(new).with_source(source.id());
+        let processed_writer = source.processed_writer().unwrap();
+        if let Some(info) = infos.get(&old) {
             // we must wait for uncontested write access to the asset source to ensure existing readers / writers
             // can finish their operations
             let _write_lock = info.file_transaction_lock.write();
-            let old = old_asset_path.path();
-            self.destination_writer().rename(old, &new).await.unwrap();
-            self.destination_writer()
-                .rename_meta(old, &new)
+            processed_writer
+                .rename(old.path(), new.path())
                 .await
                 .unwrap();
+            processed_writer
+                .rename_meta(old.path(), new.path())
+                .await
+                .unwrap();
         }
-        let new_asset_path = AssetPath::from_path(new);
-        infos.rename(&old_asset_path, &new_asset_path).await;
+        infos.rename(&old, &new).await;
     }

     async fn finish_processing_assets(&self) {
@@ -408,19 +414,20 @@ impl AssetProcessor {
     fn process_assets_internal<'scope>(
         &'scope self,
         scope: &'scope bevy_tasks::Scope<'scope, '_, ()>,
+        source: &'scope AssetSource,
         path: PathBuf,
     ) -> bevy_utils::BoxedFuture<'scope, Result<(), AssetReaderError>> {
         Box::pin(async move {
-            if self.source_reader().is_directory(&path).await? {
-                let mut path_stream = self.source_reader().read_directory(&path).await?;
+            if source.reader().is_directory(&path).await? {
+                let mut path_stream = source.reader().read_directory(&path).await?;
                 while let Some(path) = path_stream.next().await {
-                    self.process_assets_internal(scope, path).await?;
+                    self.process_assets_internal(scope, source, path).await?;
                 }
             } else {
                 // Files without extensions are skipped
                 let processor = self.clone();
                 scope.spawn(async move {
-                    processor.process_asset(&path).await;
+                    processor.process_asset(source, path).await;
                 });
             }
             Ok(())
@@ -434,8 +441,9 @@ impl AssetProcessor {
         IoTaskPool::get().scope(|scope| {
             for path in check_reprocess_queue.drain(..) {
                 let processor = self.clone();
+                let source = self.get_source(path.source()).unwrap();
                 scope.spawn(async move {
-                    processor.process_asset(&path).await;
+                    processor.process_asset(source, path.into()).await;
                 });
             }
         });
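The `get_source` lookup used in the hunk above resolves a path's source name to a registered `AssetSource`, falling back to a default when no name is given. A hedged, self-contained sketch of that registry pattern (the `SourceRegistry` type and its string payloads are illustrative stand-ins, not Bevy's real `AssetSources`):

```rust
use std::collections::HashMap;

// Illustrative stand-in for an asset-source registry: source name -> root
// folder. `None` selects the default source, mirroring how unprefixed
// asset paths behave.
struct SourceRegistry {
    default: String,
    named: HashMap<String, String>,
}

impl SourceRegistry {
    fn get(&self, id: Option<&str>) -> Result<&str, String> {
        match id {
            None => Ok(&self.default),
            Some(name) => self
                .named
                .get(name)
                .map(String::as_str)
                // Analogous to MissingAssetSourceError in the real API.
                .ok_or_else(|| format!("missing asset source: {name}")),
        }
    }
}

fn main() {
    let mut named = HashMap::new();
    named.insert("embedded".to_string(), "embedded-root".to_string());
    let registry = SourceRegistry {
        default: "assets".to_string(),
        named,
    };
    assert_eq!(registry.get(None).unwrap(), "assets");
    assert_eq!(registry.get(Some("embedded")).unwrap(), "embedded-root");
    assert!(registry.get(Some("remote")).is_err());
}
```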
@@ -471,7 +479,7 @@ impl AssetProcessor {
         processors.get(processor_type_name).cloned()
     }

-    /// Populates the initial view of each asset by scanning the source and destination folders.
+    /// Populates the initial view of each asset by scanning the unprocessed and processed asset folders.
     /// This info will later be used to determine whether or not to re-process an asset
     ///
     /// This will validate transactions and recover failed transactions when necessary.
@@ -512,68 +520,81 @@ impl AssetProcessor {
             })
         }

-        let mut source_paths = Vec::new();
-        let source_reader = self.source_reader();
-        get_asset_paths(source_reader, None, PathBuf::from(""), &mut source_paths)
+        for source in self.sources().iter_processed() {
+            let Ok(processed_reader) = source.processed_reader() else {
+                continue;
+            };
+            let Ok(processed_writer) = source.processed_writer() else {
+                continue;
+            };
+            let mut unprocessed_paths = Vec::new();
+            get_asset_paths(
+                source.reader(),
+                None,
+                PathBuf::from(""),
+                &mut unprocessed_paths,
+            )
             .await
             .map_err(InitializeError::FailedToReadSourcePaths)?;

-        let mut destination_paths = Vec::new();
-        let destination_reader = self.destination_reader();
-        let destination_writer = self.destination_writer();
-        get_asset_paths(
-            destination_reader,
-            Some(destination_writer),
-            PathBuf::from(""),
-            &mut destination_paths,
-        )
-        .await
-        .map_err(InitializeError::FailedToReadDestinationPaths)?;
+            let mut processed_paths = Vec::new();
+            get_asset_paths(
+                processed_reader,
+                Some(processed_writer),
+                PathBuf::from(""),
+                &mut processed_paths,
+            )
+            .await
+            .map_err(InitializeError::FailedToReadDestinationPaths)?;

-        for path in &source_paths {
-            asset_infos.get_or_insert(AssetPath::from_path(path.clone()));
-        }
-
-        for path in &destination_paths {
-            let asset_path = AssetPath::from_path(path.clone());
-            let mut dependencies = Vec::new();
-            if let Some(info) = asset_infos.get_mut(&asset_path) {
-                match self.destination_reader().read_meta_bytes(path).await {
-                    Ok(meta_bytes) => {
-                        match ron::de::from_bytes::<ProcessedInfoMinimal>(&meta_bytes) {
-                            Ok(minimal) => {
-                                trace!(
-                                    "Populated processed info for asset {path:?} {:?}",
-                                    minimal.processed_info
-                                );
-
-                                if let Some(processed_info) = &minimal.processed_info {
-                                    for process_dependency_info in
-                                        &processed_info.process_dependencies
-                                    {
-                                        dependencies.push(process_dependency_info.path.clone());
-                                    }
-                                }
-                                info.processed_info = minimal.processed_info;
-                            }
-                            Err(err) => {
-                                trace!("Removing processed data for {path:?} because meta could not be parsed: {err}");
-                                self.remove_processed_asset_and_meta(path).await;
-                            }
-                        }
-                    }
-                    Err(err) => {
-                        trace!("Removing processed data for {path:?} because meta failed to load: {err}");
-                        self.remove_processed_asset_and_meta(path).await;
-                    }
-                }
-            } else {
-                trace!("Removing processed data for non-existent asset {path:?}");
-                self.remove_processed_asset_and_meta(path).await;
+            for path in unprocessed_paths {
+                asset_infos.get_or_insert(AssetPath::from(path).with_source(source.id()));
             }

-            for dependency in dependencies {
-                asset_infos.add_dependant(&dependency, asset_path.clone());
+            for path in processed_paths {
+                let mut dependencies = Vec::new();
+                let asset_path = AssetPath::from(path).with_source(source.id());
+                if let Some(info) = asset_infos.get_mut(&asset_path) {
+                    match processed_reader.read_meta_bytes(asset_path.path()).await {
+                        Ok(meta_bytes) => {
+                            match ron::de::from_bytes::<ProcessedInfoMinimal>(&meta_bytes) {
+                                Ok(minimal) => {
+                                    trace!(
+                                        "Populated processed info for asset {asset_path} {:?}",
+                                        minimal.processed_info
+                                    );

+                                    if let Some(processed_info) = &minimal.processed_info {
+                                        for process_dependency_info in
+                                            &processed_info.process_dependencies
+                                        {
+                                            dependencies.push(process_dependency_info.path.clone());
+                                        }
+                                    }
+                                    info.processed_info = minimal.processed_info;
+                                }
+                                Err(err) => {
+                                    trace!("Removing processed data for {asset_path} because meta could not be parsed: {err}");
+                                    self.remove_processed_asset_and_meta(source, asset_path.path())
+                                        .await;
+                                }
+                            }
+                        }
+                        Err(err) => {
+                            trace!("Removing processed data for {asset_path} because meta failed to load: {err}");
+                            self.remove_processed_asset_and_meta(source, asset_path.path())
+                                .await;
+                        }
+                    }
+                } else {
+                    trace!("Removing processed data for non-existent asset {asset_path}");
+                    self.remove_processed_asset_and_meta(source, asset_path.path())
+                        .await;
+                }

+                for dependency in dependencies {
+                    asset_infos.add_dependant(&dependency, asset_path.clone());
+                }
             }
         }

@@ -584,19 +605,20 @@ impl AssetProcessor {

     /// Removes the processed version of an asset and its metadata, if it exists. This _is not_ transactional like `remove_processed_asset_transactional`, nor
     /// does it remove existing in-memory metadata.
-    async fn remove_processed_asset_and_meta(&self, path: &Path) {
-        if let Err(err) = self.destination_writer().remove(path).await {
+    async fn remove_processed_asset_and_meta(&self, source: &AssetSource, path: &Path) {
+        if let Err(err) = source.processed_writer().unwrap().remove(path).await {
             warn!("Failed to remove non-existent asset {path:?}: {err}");
         }

-        if let Err(err) = self.destination_writer().remove_meta(path).await {
+        if let Err(err) = source.processed_writer().unwrap().remove_meta(path).await {
             warn!("Failed to remove non-existent meta {path:?}: {err}");
         }

-        self.clean_empty_processed_ancestor_folders(path).await;
+        self.clean_empty_processed_ancestor_folders(source, path)
+            .await;
     }

-    async fn clean_empty_processed_ancestor_folders(&self, path: &Path) {
+    async fn clean_empty_processed_ancestor_folders(&self, source: &AssetSource, path: &Path) {
         // As a safety precaution don't delete absolute paths to avoid deleting folders outside of the destination folder
         if path.is_absolute() {
             error!("Attempted to clean up ancestor folders of an absolute path. This is unsafe so the operation was skipped.");
@@ -606,8 +628,9 @@ impl AssetProcessor {
             if parent == Path::new("") {
                 break;
             }
-            if self
-                .destination_writer()
+            if source
+                .processed_writer()
+                .unwrap()
                 .remove_empty_directory(parent)
                 .await
                 .is_err()
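The ancestor-cleanup loop above walks each parent of a relative path up to (but not including) the empty root, skipping absolute paths entirely. A minimal sketch of just that walk, using `std::path::Path::ancestors`; `ancestor_folders` is a hypothetical helper and stands in for the loop around `remove_empty_directory`:

```rust
use std::path::Path;

// Collect the parent folders that the cleanup loop would visit for a
// relative path, refusing absolute paths as the real code does.
fn ancestor_folders(path: &Path) -> Vec<&Path> {
    if path.is_absolute() {
        // Safety precaution: never walk ancestors of an absolute path.
        return Vec::new();
    }
    path.ancestors()
        .skip(1) // skip the path itself; only parent folders matter
        .take_while(|parent| *parent != Path::new(""))
        .collect()
}

fn main() {
    let parents = ancestor_folders(Path::new("models/heroes/knight.gltf"));
    assert_eq!(
        parents,
        vec![Path::new("models/heroes"), Path::new("models")]
    );
    assert!(ancestor_folders(Path::new("/abs/path.gltf")).is_empty());
}
```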
@@ -624,33 +647,39 @@ impl AssetProcessor {
     /// to block reads until the asset is processed).
     ///
     /// [`LoadContext`]: crate::loader::LoadContext
-    async fn process_asset(&self, path: &Path) {
-        let result = self.process_asset_internal(path).await;
+    /// [`ProcessorGatedReader`]: crate::io::processor_gated::ProcessorGatedReader
+    async fn process_asset(&self, source: &AssetSource, path: PathBuf) {
+        let asset_path = AssetPath::from(path).with_source(source.id());
+        let result = self.process_asset_internal(source, &asset_path).await;
         let mut infos = self.data.asset_infos.write().await;
-        let asset_path = AssetPath::from_path(path.to_owned());
         infos.finish_processing(asset_path, result).await;
     }

-    async fn process_asset_internal(&self, path: &Path) -> Result<ProcessResult, ProcessError> {
-        if path.extension().is_none() {
-            return Err(ProcessError::ExtensionRequired);
-        }
-        let asset_path = AssetPath::from_path(path.to_path_buf());
+    async fn process_asset_internal(
+        &self,
+        source: &AssetSource,
+        asset_path: &AssetPath<'static>,
+    ) -> Result<ProcessResult, ProcessError> {
+        // TODO: The extension check was removed now that AssetPath is the input. is that ok?
         // TODO: check if already processing to protect against duplicate hot-reload events
-        debug!("Processing {:?}", path);
+        debug!("Processing {:?}", asset_path);
         let server = &self.server;
+        let path = asset_path.path();
+        let reader = source.reader();

+        let reader_err = |err| ProcessError::AssetReaderError {
+            path: asset_path.clone(),
+            err,
+        };
+        let writer_err = |err| ProcessError::AssetWriterError {
+            path: asset_path.clone(),
+            err,
+        };

         // Note: we get the asset source reader first because we don't want to create meta files for assets that don't have source files
-        let mut reader = self.source_reader().read(path).await.map_err(|e| match e {
-            AssetReaderError::NotFound(_) => ProcessError::MissingAssetSource(path.to_owned()),
-            AssetReaderError::Io(err) => ProcessError::AssetSourceIoError(err),
-        })?;
+        let mut byte_reader = reader.read(path).await.map_err(reader_err)?;

-        let (mut source_meta, meta_bytes, processor) = match self
-            .source_reader()
-            .read_meta_bytes(path)
-            .await
-        {
+        let (mut source_meta, meta_bytes, processor) = match reader.read_meta_bytes(path).await {
             Ok(meta_bytes) => {
                 let minimal: AssetMetaMinimal = ron::de::from_bytes(&meta_bytes).map_err(|e| {
                     ProcessError::DeserializeMetaError(DeserializeMetaError::DeserializeMinimal(e))
@@ -684,7 +713,7 @@ impl AssetProcessor {
                 let meta = processor.default_meta();
                 (meta, Some(processor))
             } else {
-                match server.get_path_asset_loader(&asset_path).await {
+                match server.get_path_asset_loader(asset_path.clone()).await {
                     Ok(loader) => (loader.default_meta(), None),
                     Err(MissingAssetLoaderForExtensionError { .. }) => {
                         let meta: Box<dyn AssetMetaDyn> =
@@ -695,19 +724,31 @@ impl AssetProcessor {
                 };
                 let meta_bytes = meta.serialize();
                 // write meta to source location if it doesn't already exist
-                self.source_writer()
+                source
+                    .writer()?
                     .write_meta_bytes(path, &meta_bytes)
-                    .await?;
+                    .await
+                    .map_err(writer_err)?;
                 (meta, meta_bytes, processor)
             }
-            Err(err) => return Err(ProcessError::ReadAssetMetaError(err)),
+            Err(err) => {
+                return Err(ProcessError::ReadAssetMetaError {
+                    path: asset_path.clone(),
+                    err,
+                })
+            }
         };

+        let processed_writer = source.processed_writer()?;
+
         let mut asset_bytes = Vec::new();
-        reader
+        byte_reader
             .read_to_end(&mut asset_bytes)
             .await
-            .map_err(ProcessError::AssetSourceIoError)?;
+            .map_err(|e| ProcessError::AssetReaderError {
+                path: asset_path.clone(),
+                err: AssetReaderError::Io(e),
+            })?;

         // PERF: in theory these hashes could be streamed if we want to avoid allocating the whole asset.
         // The downside is that reading assets would need to happen twice (once for the hash and once for the asset loader)
@@ -722,7 +763,7 @@ impl AssetProcessor {
         {
             let infos = self.data.asset_infos.read().await;
             if let Some(current_processed_info) = infos
-                .get(&asset_path)
+                .get(asset_path)
                 .and_then(|i| i.processed_info.as_ref())
             {
                 if current_processed_info.hash == new_hash {
@@ -754,18 +795,24 @@ impl AssetProcessor {
         // NOTE: if processing the asset fails this will produce an "unfinished" log entry, forcing a rebuild on next run.
         // Directly writing to the asset destination in the processor necessitates this behavior
         // TODO: this class of failure can be recovered via re-processing + smarter log validation that allows for duplicate transactions in the event of failures
-        self.log_begin_processing(path).await;
+        self.log_begin_processing(asset_path).await;
         if let Some(processor) = processor {
-            let mut writer = self.destination_writer().write(path).await?;
+            let mut writer = processed_writer.write(path).await.map_err(writer_err)?;
             let mut processed_meta = {
                 let mut context =
-                    ProcessContext::new(self, &asset_path, &asset_bytes, &mut new_processed_info);
+                    ProcessContext::new(self, asset_path, &asset_bytes, &mut new_processed_info);
                 processor
                     .process(&mut context, source_meta, &mut *writer)
                     .await?
             };

-            writer.flush().await.map_err(AssetWriterError::Io)?;
+            writer
+                .flush()
+                .await
+                .map_err(|e| ProcessError::AssetWriterError {
+                    path: asset_path.clone(),
+                    err: AssetWriterError::Io(e),
+                })?;

             let full_hash = get_full_asset_hash(
                 new_hash,
@ -777,20 +824,23 @@ impl AssetProcessor {
|
|||
new_processed_info.full_hash = full_hash;
|
||||
*processed_meta.processed_info_mut() = Some(new_processed_info.clone());
|
||||
let meta_bytes = processed_meta.serialize();
|
||||
self.destination_writer()
|
||||
processed_writer
|
||||
.write_meta_bytes(path, &meta_bytes)
|
||||
.await?;
|
||||
.await
|
||||
.map_err(writer_err)?;
|
||||
} else {
|
||||
self.destination_writer()
|
||||
processed_writer
|
||||
.write_bytes(path, &asset_bytes)
|
||||
.await?;
|
||||
.await
|
||||
.map_err(writer_err)?;
|
||||
*source_meta.processed_info_mut() = Some(new_processed_info.clone());
|
||||
let meta_bytes = source_meta.serialize();
|
||||
self.destination_writer()
|
||||
processed_writer
|
||||
.write_meta_bytes(path, &meta_bytes)
|
||||
.await?;
|
||||
.await
|
||||
.map_err(writer_err)?;
|
||||
}
|
||||
self.log_end_processing(path).await;
|
||||
self.log_end_processing(asset_path).await;
|
||||
|
||||
Ok(ProcessResult::Processed(new_processed_info))
|
||||
}
|
||||
|
@ -818,27 +868,35 @@ impl AssetProcessor {
|
|||
}
|
||||
LogEntryError::UnfinishedTransaction(path) => {
|
||||
debug!("Asset {path:?} did not finish processing. Clearing state for that asset");
|
||||
if let Err(err) = self.destination_writer().remove(&path).await {
|
||||
let mut unrecoverable_err = |message: &dyn std::fmt::Display| {
|
||||
error!("Failed to remove asset {path:?}: {message}");
|
||||
state_is_valid = false;
|
||||
};
|
||||
let Ok(source) = self.get_source(path.source()) else {
|
||||
(unrecoverable_err)(&"AssetSource does not exist");
|
||||
continue;
|
||||
};
|
||||
let Ok(processed_writer) = source.processed_writer() else {
|
||||
(unrecoverable_err)(&"AssetSource does not have a processed AssetWriter registered");
|
||||
continue;
|
||||
};
|
||||
|
||||
if let Err(err) = processed_writer.remove(path.path()).await {
|
||||
match err {
|
||||
AssetWriterError::Io(err) => {
|
||||
// any error but NotFound means we could be in a bad state
|
||||
if err.kind() != ErrorKind::NotFound {
|
||||
error!("Failed to remove asset {path:?}: {err}");
|
||||
state_is_valid = false;
|
||||
(unrecoverable_err)(&err);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if let Err(err) = self.destination_writer().remove_meta(&path).await
|
||||
{
|
||||
if let Err(err) = processed_writer.remove_meta(path.path()).await {
|
||||
match err {
|
||||
AssetWriterError::Io(err) => {
|
||||
// any error but NotFound means we could be in a bad state
|
||||
if err.kind() != ErrorKind::NotFound {
|
||||
error!(
|
||||
"Failed to remove asset meta {path:?}: {err}"
|
||||
);
|
||||
state_is_valid = false;
|
||||
(unrecoverable_err)(&err);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -852,12 +910,16 @@ impl AssetProcessor {
|
|||
|
||||
if !state_is_valid {
|
||||
error!("Processed asset transaction log state was invalid and unrecoverable for some reason (see previous logs). Removing processed assets and starting fresh.");
|
||||
if let Err(err) = self
|
||||
.destination_writer()
|
||||
.remove_assets_in_directory(Path::new(""))
|
||||
.await
|
||||
{
|
||||
panic!("Processed assets were in a bad state. To correct this, the asset processor attempted to remove all processed assets and start from scratch. This failed. There is no way to continue. Try restarting, or deleting imported asset folder manually. {err}");
|
||||
for source in self.sources().iter_processed() {
|
||||
let Ok(processed_writer) = source.processed_writer() else {
|
||||
continue;
|
||||
};
|
||||
if let Err(err) = processed_writer
|
||||
.remove_assets_in_directory(Path::new(""))
|
||||
.await
|
||||
{
|
||||
panic!("Processed assets were in a bad state. To correct this, the asset processor attempted to remove all processed assets and start from scratch. This failed. There is no way to continue. Try restarting, or deleting imported asset folder manually. {err}");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -870,35 +932,20 @@ impl AssetProcessor {
|
|||
}
|
||||
|
||||
impl AssetProcessorData {
|
||||
pub fn new(
|
||||
source_reader: Box<dyn AssetReader>,
|
||||
source_writer: Box<dyn AssetWriter>,
|
||||
destination_reader: Box<dyn AssetReader>,
|
||||
destination_writer: Box<dyn AssetWriter>,
|
||||
) -> Self {
|
||||
pub fn new(source: AssetSources) -> Self {
|
||||
let (mut finished_sender, finished_receiver) = async_broadcast::broadcast(1);
|
||||
let (mut initialized_sender, initialized_receiver) = async_broadcast::broadcast(1);
|
||||
// allow overflow on these "one slot" channels to allow receivers to retrieve the "latest" state, and to allow senders to
|
||||
// not block if there was older state present.
|
||||
finished_sender.set_overflow(true);
|
||||
initialized_sender.set_overflow(true);
|
||||
let (source_event_sender, source_event_receiver) = crossbeam_channel::unbounded();
|
||||
// TODO: watching for changes could probably be entirely optional / we could just warn here
|
||||
let source_watcher = source_reader.watch_for_changes(source_event_sender);
|
||||
if source_watcher.is_none() {
|
||||
error!("{}", CANNOT_WATCH_ERROR_MESSAGE);
|
||||
}
|
||||
|
||||
AssetProcessorData {
|
||||
source_reader,
|
||||
source_writer,
|
||||
destination_reader,
|
||||
destination_writer,
|
||||
sources: source,
|
||||
finished_sender,
|
||||
finished_receiver,
|
||||
initialized_sender,
|
||||
initialized_receiver,
|
||||
source_event_receiver,
|
||||
_source_watcher: source_watcher,
|
||||
state: async_lock::RwLock::new(ProcessorState::Initializing),
|
||||
log: Default::default(),
|
||||
processors: Default::default(),
|
||||
|
@ -908,11 +955,11 @@ impl AssetProcessorData {
|
|||
}
|
||||
|
||||
/// Returns a future that will not finish until the path has been processed.
|
||||
pub async fn wait_until_processed(&self, path: &Path) -> ProcessStatus {
|
||||
pub async fn wait_until_processed(&self, path: AssetPath<'static>) -> ProcessStatus {
|
||||
self.wait_until_initialized().await;
|
||||
let mut receiver = {
|
||||
let infos = self.asset_infos.write().await;
|
||||
let info = infos.get(&AssetPath::from_path(path.to_path_buf()));
|
||||
let info = infos.get(&path);
|
||||
match info {
|
||||
Some(info) => match info.status {
|
||||
Some(result) => return result,
|
||||
|
@ -1038,7 +1085,7 @@ pub struct ProcessorAssetInfos {
|
|||
/// Therefore this _must_ always be consistent with the `infos` data. If a new asset is added to `infos`, it should
|
||||
/// check this maps for dependencies and add them. If an asset is removed, it should update the dependants here.
|
||||
non_existent_dependants: HashMap<AssetPath<'static>, HashSet<AssetPath<'static>>>,
|
||||
check_reprocess_queue: VecDeque<PathBuf>,
|
||||
check_reprocess_queue: VecDeque<AssetPath<'static>>,
|
||||
}
|
||||
|
||||
impl ProcessorAssetInfos {
|
||||
|
@ -1100,7 +1147,7 @@ impl ProcessorAssetInfos {
|
|||
info.update_status(ProcessStatus::Processed).await;
|
||||
let dependants = info.dependants.iter().cloned().collect::<Vec<_>>();
|
||||
for path in dependants {
|
||||
self.check_reprocess_queue.push_back(path.path().to_owned());
|
||||
self.check_reprocess_queue.push_back(path);
|
||||
}
|
||||
}
|
||||
Ok(ProcessResult::SkippedNotChanged) => {
|
||||
|
@ -1118,20 +1165,21 @@ impl ProcessorAssetInfos {
|
|||
// Skip assets without extensions
|
||||
}
|
||||
Err(ProcessError::MissingAssetLoaderForExtension(_)) => {
|
||||
trace!("No loader found for {:?}", asset_path);
|
||||
trace!("No loader found for {asset_path}");
|
||||
}
|
||||
Err(ProcessError::MissingAssetSource(_)) => {
|
||||
Err(ProcessError::AssetReaderError {
|
||||
err: AssetReaderError::NotFound(_),
|
||||
..
|
||||
}) => {
|
||||
// if there is no asset source, no processing can be done
|
||||
trace!(
|
||||
"No need to process asset {:?} because it does not exist",
|
||||
asset_path
|
||||
);
|
||||
trace!("No need to process asset {asset_path} because it does not exist");
|
||||
}
|
||||
Err(err) => {
|
||||
error!("Failed to process asset {:?}: {:?}", asset_path, err);
|
||||
error!("Failed to process asset {asset_path}: {err}");
|
||||
// if this failed because a dependency could not be loaded, make sure it is reprocessed if that dependency is reprocessed
|
||||
if let ProcessError::AssetLoadError(AssetLoadError::CannotLoadDependency {
|
||||
if let ProcessError::AssetLoadError(AssetLoadError::AssetLoaderError {
|
||||
path: dependency,
|
||||
..
|
||||
}) = err
|
||||
{
|
||||
let info = self.get_mut(&asset_path).expect("info should exist");
|
||||
|
@ -1220,10 +1268,10 @@ impl ProcessorAssetInfos {
|
|||
new_info.dependants.iter().cloned().collect()
|
||||
};
|
||||
// Queue the asset for a reprocess check, in case it needs new meta.
|
||||
self.check_reprocess_queue.push_back(new.path().to_owned());
|
||||
self.check_reprocess_queue.push_back(new.clone());
|
||||
for dependant in dependants {
|
||||
// Queue dependants for reprocessing because they might have been waiting for this asset.
|
||||
self.check_reprocess_queue.push_back(dependant.into());
|
||||
self.check_reprocess_queue.push_back(dependant);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -1,5 +1,8 @@
 use crate::{
-    io::{AssetReaderError, AssetWriterError, Writer},
+    io::{
+        AssetReaderError, AssetWriterError, MissingAssetWriterError,
+        MissingProcessedAssetReaderError, MissingProcessedAssetWriterError, Writer,
+    },
     meta::{AssetAction, AssetMeta, AssetMetaDyn, ProcessDependencyInfo, ProcessedInfo, Settings},
     processor::AssetProcessor,
     saver::{AssetSaver, SavedAsset},

@@ -8,7 +11,7 @@ use crate::{
 };
 use bevy_utils::BoxedFuture;
 use serde::{Deserialize, Serialize};
-use std::{marker::PhantomData, path::PathBuf};
+use std::marker::PhantomData;
 use thiserror::Error;

 /// Asset "processor" logic that reads input asset bytes (stored on [`ProcessContext`]), processes the value in some way,

@@ -70,20 +73,33 @@ pub struct LoadAndSaveSettings<LoaderSettings, SaverSettings> {
 /// An error that is encountered during [`Process::process`].
 #[derive(Error, Debug)]
 pub enum ProcessError {
-    #[error("The asset source file for '{0}' does not exist")]
-    MissingAssetSource(PathBuf),
-    #[error(transparent)]
-    AssetSourceIoError(std::io::Error),
     #[error(transparent)]
     MissingAssetLoaderForExtension(#[from] MissingAssetLoaderForExtensionError),
     #[error(transparent)]
     MissingAssetLoaderForTypeName(#[from] MissingAssetLoaderForTypeNameError),
     #[error("The processor '{0}' does not exist")]
     MissingProcessor(String),
+    #[error("Encountered an AssetReader error for '{path}': {err}")]
+    AssetReaderError {
+        path: AssetPath<'static>,
+        err: AssetReaderError,
+    },
+    #[error("Encountered an AssetWriter error for '{path}': {err}")]
+    AssetWriterError {
+        path: AssetPath<'static>,
+        err: AssetWriterError,
+    },
     #[error(transparent)]
-    AssetWriterError(#[from] AssetWriterError),
-    #[error("Failed to read asset metadata {0:?}")]
-    ReadAssetMetaError(AssetReaderError),
+    MissingAssetWriterError(#[from] MissingAssetWriterError),
+    #[error(transparent)]
+    MissingProcessedAssetReaderError(#[from] MissingProcessedAssetReaderError),
+    #[error(transparent)]
+    MissingProcessedAssetWriterError(#[from] MissingProcessedAssetWriterError),
+    #[error("Failed to read asset metadata for {path}: {err}")]
+    ReadAssetMetaError {
+        path: AssetPath<'static>,
+        err: AssetReaderError,
+    },
     #[error(transparent)]
     DeserializeMetaError(#[from] DeserializeMetaError),
     #[error(transparent)]

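The `ProcessError` rework above replaces bare tuple variants with struct variants that carry the failing `AssetPath`, so error messages name the asset that broke. A minimal std-only sketch of the same pattern (hypothetical `ReadError` type and message strings; the real enum derives `Display` via `thiserror`'s `#[error(...)]` attribute instead of the hand-written impl shown here):

```rust
use std::fmt;

// Hypothetical stand-in for a reader error type.
#[derive(Debug)]
struct ReadError(String);

#[derive(Debug)]
enum ProcessError {
    // Old style: the error alone, with no context about which asset failed.
    #[allow(dead_code)]
    Bare(ReadError),
    // New style: a struct variant that carries the asset path alongside the error.
    WithPath { path: String, err: ReadError },
}

impl fmt::Display for ProcessError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProcessError::Bare(e) => write!(f, "read failed: {}", e.0),
            ProcessError::WithPath { path, err } => {
                write!(f, "Encountered an AssetReader error for '{}': {}", path, err.0)
            }
        }
    }
}

fn main() {
    let err = ProcessError::WithPath {
        path: "other://shaders/custom.wgsl".to_string(),
        err: ReadError("not found".to_string()),
    };
    // The embedded path makes the diagnostic actionable.
    println!("{err}");
}
```

The trade-off is an extra `AssetPath` clone per error, which is cheap on the failure path.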
@@ -2,7 +2,10 @@ mod info;

 use crate::{
     folder::LoadedFolder,
-    io::{AssetReader, AssetReaderError, AssetSourceEvent, AssetWatcher, Reader},
+    io::{
+        AssetReader, AssetReaderError, AssetSource, AssetSourceEvent, AssetSourceId, AssetSources,
+        MissingAssetSourceError, MissingProcessedAssetReaderError, Reader,
+    },
     loader::{AssetLoader, ErasedAssetLoader, LoadContext, LoadedAsset},
     meta::{
         loader_settings_meta_transform, AssetActionMinimal, AssetMetaDyn, AssetMetaMinimal,

@@ -48,52 +51,53 @@ pub(crate) struct AssetServerData {
     pub(crate) loaders: Arc<RwLock<AssetLoaders>>,
     asset_event_sender: Sender<InternalAssetEvent>,
     asset_event_receiver: Receiver<InternalAssetEvent>,
-    source_event_receiver: Receiver<AssetSourceEvent>,
-    reader: Box<dyn AssetReader>,
-    _watcher: Option<Box<dyn AssetWatcher>>,
+    sources: AssetSources,
+    mode: AssetServerMode,
 }

+/// The "asset mode" the server is currently in.
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum AssetServerMode {
+    /// This server loads unprocessed assets.
+    Unprocessed,
+    /// This server loads processed assets.
+    Processed,
+}
+
 impl AssetServer {
     /// Create a new instance of [`AssetServer`]. If `watch_for_changes` is true, the [`AssetReader`] storage will watch for changes to
     /// asset sources and hot-reload them.
-    pub fn new(reader: Box<dyn AssetReader>, watch_for_changes: bool) -> Self {
-        Self::new_with_loaders(reader, Default::default(), watch_for_changes)
+    pub fn new(sources: AssetSources, mode: AssetServerMode, watching_for_changes: bool) -> Self {
+        Self::new_with_loaders(sources, Default::default(), mode, watching_for_changes)
     }

     pub(crate) fn new_with_loaders(
-        reader: Box<dyn AssetReader>,
+        sources: AssetSources,
         loaders: Arc<RwLock<AssetLoaders>>,
-        watch_for_changes: bool,
+        mode: AssetServerMode,
+        watching_for_changes: bool,
     ) -> Self {
         let (asset_event_sender, asset_event_receiver) = crossbeam_channel::unbounded();
-        let (source_event_sender, source_event_receiver) = crossbeam_channel::unbounded();
         let mut infos = AssetInfos::default();
-        let watcher = if watch_for_changes {
-            infos.watching_for_changes = true;
-            let watcher = reader.watch_for_changes(source_event_sender);
-            if watcher.is_none() {
-                error!("{}", CANNOT_WATCH_ERROR_MESSAGE);
-            }
-            watcher
-        } else {
-            None
-        };
+        infos.watching_for_changes = watching_for_changes;
         Self {
             data: Arc::new(AssetServerData {
-                reader,
-                _watcher: watcher,
+                sources,
+                mode,
                 asset_event_sender,
                 asset_event_receiver,
-                source_event_receiver,
                 loaders,
                 infos: RwLock::new(infos),
             }),
         }
     }

-    /// Returns the primary [`AssetReader`].
-    pub fn reader(&self) -> &dyn AssetReader {
-        &*self.data.reader
+    /// Retrieves the [`AssetSource`] for the given `source`.
+    pub fn get_source<'a>(
+        &'a self,
+        source: impl Into<AssetSourceId<'a>>,
+    ) -> Result<&'a AssetSource, MissingAssetSourceError> {
+        self.data.sources.get(source.into())
     }

     /// Registers a new [`AssetLoader`]. [`AssetLoader`]s must be registered before they can be used.

@@ -450,28 +454,30 @@ impl AssetServer {
     /// contain handles to all assets in the folder. You can wait for all assets to load by checking the [`LoadedFolder`]'s
     /// [`RecursiveDependencyLoadState`].
     #[must_use = "not using the returned strong handle may result in the unexpected release of the assets"]
-    pub fn load_folder(&self, path: impl AsRef<Path>) -> Handle<LoadedFolder> {
+    pub fn load_folder<'a>(&self, path: impl Into<AssetPath<'a>>) -> Handle<LoadedFolder> {
         let handle = {
             let mut infos = self.data.infos.write();
             infos.create_loading_handle::<LoadedFolder>()
         };
         let id = handle.id().untyped();
+        let path = path.into().into_owned();

         fn load_folder<'a>(
             path: &'a Path,
+            reader: &'a dyn AssetReader,
             server: &'a AssetServer,
             handles: &'a mut Vec<UntypedHandle>,
         ) -> bevy_utils::BoxedFuture<'a, Result<(), AssetLoadError>> {
             Box::pin(async move {
-                let is_dir = server.reader().is_directory(path).await?;
+                let is_dir = reader.is_directory(path).await?;
                 if is_dir {
-                    let mut path_stream = server.reader().read_directory(path.as_ref()).await?;
+                    let mut path_stream = reader.read_directory(path.as_ref()).await?;
                     while let Some(child_path) = path_stream.next().await {
-                        if server.reader().is_directory(&child_path).await? {
-                            load_folder(&child_path, server, handles).await?;
+                        if reader.is_directory(&child_path).await? {
+                            load_folder(&child_path, reader, server, handles).await?;
                         } else {
                             let path = child_path.to_str().expect("Path should be a valid string.");
-                            match server.load_untyped_async(AssetPath::new(path)).await {
+                            match server.load_untyped_async(AssetPath::parse(path)).await {
                                 Ok(handle) => handles.push(handle),
                                 // skip assets that cannot be loaded
                                 Err(

@@ -488,11 +494,32 @@ impl AssetServer {
         }

         let server = self.clone();
-        let owned_path = path.as_ref().to_owned();
         IoTaskPool::get()
             .spawn(async move {
+                let Ok(source) = server.get_source(path.source()) else {
+                    error!(
+                        "Failed to load {path}. AssetSource {:?} does not exist",
+                        path.source()
+                    );
+                    return;
+                };
+
+                let asset_reader = match server.data.mode {
+                    AssetServerMode::Unprocessed { .. } => source.reader(),
+                    AssetServerMode::Processed { .. } => match source.processed_reader() {
+                        Ok(reader) => reader,
+                        Err(_) => {
+                            error!(
+                                "Failed to load {path}. AssetSource {:?} does not have a processed AssetReader",
+                                path.source()
+                            );
+                            return;
+                        }
+                    },
+                };
+
                 let mut handles = Vec::new();
-                match load_folder(&owned_path, &server, &mut handles).await {
+                match load_folder(path.path(), asset_reader, &server, &mut handles).await {
                     Ok(_) => server.send_asset_event(InternalAssetEvent::Loaded {
                         id,
                         loaded_asset: LoadedAsset::new_with_dependencies(

@@ -586,6 +613,11 @@ impl AssetServer {
         Some(info.path.as_ref()?.clone())
     }

+    /// Returns the [`AssetServerMode`] this server is currently in.
+    pub fn mode(&self) -> AssetServerMode {
+        self.data.mode
+    }
+
     /// Pre-register a loader that will later be added.
     ///
     /// Assets loaded with matching extensions will be blocked until the

@@ -641,34 +673,43 @@ impl AssetServer {
         ),
         AssetLoadError,
     > {
+        let source = self.get_source(asset_path.source())?;
         // NOTE: We grab the asset byte reader first to ensure this is transactional for AssetReaders like ProcessorGatedReader
         // The asset byte reader will "lock" the processed asset, preventing writes for the duration of the lock.
         // Then the meta reader, if meta exists, will correspond to the meta for the current "version" of the asset.
         // See ProcessedAssetInfo::file_transaction_lock for more context
-        let reader = self.data.reader.read(asset_path.path()).await?;
-        match self.data.reader.read_meta_bytes(asset_path.path()).await {
+        let asset_reader = match self.data.mode {
+            AssetServerMode::Unprocessed { .. } => source.reader(),
+            AssetServerMode::Processed { .. } => source.processed_reader()?,
+        };
+        let reader = asset_reader.read(asset_path.path()).await?;
+        match asset_reader.read_meta_bytes(asset_path.path()).await {
             Ok(meta_bytes) => {
                 // TODO: this isn't fully minimal yet. we only need the loader
                 let minimal: AssetMetaMinimal = ron::de::from_bytes(&meta_bytes).map_err(|e| {
-                    AssetLoadError::DeserializeMeta(DeserializeMetaError::DeserializeMinimal(e))
+                    AssetLoadError::DeserializeMeta {
+                        path: asset_path.clone_owned(),
+                        error: Box::new(DeserializeMetaError::DeserializeMinimal(e)),
+                    }
                 })?;
                 let loader_name = match minimal.asset {
                     AssetActionMinimal::Load { loader } => loader,
                     AssetActionMinimal::Process { .. } => {
                         return Err(AssetLoadError::CannotLoadProcessedAsset {
-                            path: asset_path.clone().into_owned(),
+                            path: asset_path.clone_owned(),
                         })
                     }
                     AssetActionMinimal::Ignore => {
                         return Err(AssetLoadError::CannotLoadIgnoredAsset {
-                            path: asset_path.clone().into_owned(),
+                            path: asset_path.clone_owned(),
                         })
                     }
                 };
                 let loader = self.get_asset_loader_with_type_name(&loader_name).await?;
-                let meta = loader.deserialize_meta(&meta_bytes).map_err(|_| {
-                    AssetLoadError::CannotLoadDependency {
-                        path: asset_path.clone().into_owned(),
+                let meta = loader.deserialize_meta(&meta_bytes).map_err(|e| {
+                    AssetLoadError::DeserializeMeta {
+                        path: asset_path.clone_owned(),
+                        error: Box::new(e),
                     }
                 })?;

@@ -693,13 +734,16 @@ impl AssetServer {
         populate_hashes: bool,
     ) -> Result<ErasedLoadedAsset, AssetLoadError> {
         // TODO: experiment with this
-        let asset_path = asset_path.clone().into_owned();
+        let asset_path = asset_path.clone_owned();
         let load_context =
             LoadContext::new(self, asset_path.clone(), load_dependencies, populate_hashes);
-        loader
-            .load(reader, meta, load_context)
-            .await
-            .map_err(|_| AssetLoadError::CannotLoadDependency { path: asset_path })
+        loader.load(reader, meta, load_context).await.map_err(|e| {
+            AssetLoadError::AssetLoaderError {
+                path: asset_path.clone_owned(),
+                loader_name: loader.type_name(),
+                error: e,
+            }
+        })
     }
 }

@@ -742,17 +786,36 @@ pub fn handle_internal_asset_events(world: &mut World) {
     }

     let mut paths_to_reload = HashSet::new();
-    for event in server.data.source_event_receiver.try_iter() {
+    let mut handle_event = |source: AssetSourceId<'static>, event: AssetSourceEvent| {
         match event {
             // TODO: if the asset was processed and the processed file was changed, the first modified event
             // should be skipped?
             AssetSourceEvent::ModifiedAsset(path) | AssetSourceEvent::ModifiedMeta(path) => {
-                let path = AssetPath::from_path(path);
+                let path = AssetPath::from(path).with_source(source);
                 queue_ancestors(&path, &infos, &mut paths_to_reload);
                 paths_to_reload.insert(path);
             }
             _ => {}
         }
-    }
+    };
+
+    for source in server.data.sources.iter() {
+        match server.data.mode {
+            AssetServerMode::Unprocessed { .. } => {
+                if let Some(receiver) = source.event_receiver() {
+                    for event in receiver.try_iter() {
+                        handle_event(source.id(), event);
+                    }
+                }
+            }
+            AssetServerMode::Processed { .. } => {
+                if let Some(receiver) = source.processed_event_receiver() {
+                    for event in receiver.try_iter() {
+                        handle_event(source.id(), event);
+                    }
+                }
+            }
+        }
+    }

     for path in paths_to_reload {

@@ -848,16 +911,27 @@ pub enum AssetLoadError {
     MissingAssetLoaderForTypeName(#[from] MissingAssetLoaderForTypeNameError),
     #[error(transparent)]
     AssetReaderError(#[from] AssetReaderError),
+    #[error(transparent)]
+    MissingAssetSourceError(#[from] MissingAssetSourceError),
+    #[error(transparent)]
+    MissingProcessedAssetReaderError(#[from] MissingProcessedAssetReaderError),
     #[error("Encountered an error while reading asset metadata bytes")]
     AssetMetaReadError,
-    #[error(transparent)]
-    DeserializeMeta(DeserializeMetaError),
+    #[error("Failed to deserialize meta for asset {path}: {error}")]
+    DeserializeMeta {
+        path: AssetPath<'static>,
+        error: Box<DeserializeMetaError>,
+    },
     #[error("Asset '{path}' is configured to be processed. It cannot be loaded directly.")]
     CannotLoadProcessedAsset { path: AssetPath<'static> },
     #[error("Asset '{path}' is configured to be ignored. It cannot be loaded.")]
     CannotLoadIgnoredAsset { path: AssetPath<'static> },
-    #[error("Asset '{path}' is a dependency. It cannot be loaded directly.")]
-    CannotLoadDependency { path: AssetPath<'static> },
+    #[error("Failed to load asset '{path}' with asset loader '{loader_name}': {error}")]
+    AssetLoaderError {
+        path: AssetPath<'static>,
+        loader_name: &'static str,
+        error: Box<dyn std::error::Error + Send + Sync + 'static>,
+    },
 }

 /// An error that occurs when an [`AssetLoader`] is not registered for a given extension.

@@ -893,8 +967,3 @@ impl std::fmt::Debug for AssetServer {
         .finish()
     }
 }
-
-pub(crate) static CANNOT_WATCH_ERROR_MESSAGE: &str =
-    "Cannot watch for changes because the current `AssetReader` does not support it. If you are using \
-    the FileAssetReader (the default on desktop platforms), enabling the filesystem_watcher feature will \
-    add this functionality.";

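The server changes above route every load through `AssetPath`'s new source-aware form, where an optional `some_source://` prefix selects the `AssetSource` and its absence falls back to the default. As a rough illustration of that convention (a hypothetical helper, not the real `AssetPath::parse`):

```rust
// Hypothetical helper mirroring the `source://path` convention described in the PR;
// the real parsing lives in bevy_asset's `AssetPath::parse`.
fn split_source(input: &str) -> (Option<&str>, &str) {
    match input.split_once("://") {
        // Explicit source prefix: `custom_source://path/to/asset`.
        Some((source, rest)) => (Some(source), rest),
        // No prefix: the default asset source is used.
        None => (None, input),
    }
}

fn main() {
    assert_eq!(
        split_source("custom_source://path/to/shader.wgsl"),
        (Some("custom_source"), "path/to/shader.wgsl")
    );
    // Paths without a prefix behave exactly as before.
    assert_eq!(
        split_source("path/to/shader.wgsl"),
        (None, "path/to/shader.wgsl")
    );
}
```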
@@ -1206,7 +1206,7 @@ async fn load_buffers(
             Err(()) => {
                 // TODO: Remove this and add dep
                 let buffer_path = load_context.path().parent().unwrap().join(uri);
-                load_context.read_asset_bytes(&buffer_path).await?
+                load_context.read_asset_bytes(buffer_path).await?
             }
         };
         buffer_data.push(buffer_bytes);

@@ -103,7 +103,10 @@ glam_assert = ["bevy_math/glam_assert"]
 default_font = ["bevy_text?/default_font"]

 # Enables watching the filesystem for Bevy Asset hot-reloading
-filesystem_watcher = ["bevy_asset?/filesystem_watcher"]
+file_watcher = ["bevy_asset?/file_watcher"]
+
+# Enables watching embedded files for Bevy Asset hot-reloading
+embedded_watcher = ["bevy_asset?/embedded_watcher"]

 [dependencies]
 # bevy

@@ -12,7 +12,7 @@ pub struct TaskPoolBuilder {}
 /// This is a dummy struct for wasm support to provide the same api as with the multithreaded
 /// task pool. In the case of the multithreaded task pool this struct is used to spawn
 /// tasks on a specific thread. But the wasm task pool just calls
-/// [`wasm_bindgen_futures::spawn_local`] for spawning which just runs tasks on the main thread
+/// `wasm_bindgen_futures::spawn_local` for spawning which just runs tasks on the main thread
 /// and so the [`ThreadExecutor`] does nothing.
 #[derive(Default)]
 pub struct ThreadExecutor<'a>(PhantomData<&'a ()>);

@@ -159,7 +159,7 @@ impl TaskPool {
         FakeTask
     }

-    /// Spawns a static future on the JS event loop. This is exactly the same as [`TaskSpool::spawn`].
+    /// Spawns a static future on the JS event loop. This is exactly the same as [`TaskPool::spawn`].
     pub fn spawn_local<T>(&self, future: impl Future<Output = T> + 'static) -> FakeTask
     where
         T: 'static,

@@ -42,7 +42,7 @@ where
     &'a T: Into<Arc<T>>,
 {
     /// Converts this into an "owned" value. If internally a value is borrowed, it will be cloned into an "owned [`Arc`]".
-    /// If it is already an "owned [`Arc`]", it will remain unchanged.
+    /// If it is already a [`CowArc::Owned`] or a [`CowArc::Static`], it will remain unchanged.
     #[inline]
     pub fn into_owned(self) -> CowArc<'static, T> {
         match self {

@@ -51,6 +51,14 @@ where
             CowArc::Owned(value) => CowArc::Owned(value),
         }
     }
+
+    /// Clones into an owned [`CowArc<'static>`]. If internally a value is borrowed, it will be cloned into an "owned [`Arc`]".
+    /// If it is already a [`CowArc::Owned`] or [`CowArc::Static`], the value will be cloned.
+    /// This is equivalent to `.clone().into_owned()`.
+    #[inline]
+    pub fn clone_owned(&self) -> CowArc<'static, T> {
+        self.clone().into_owned()
+    }
 }

 impl<'a, T: ?Sized> Clone for CowArc<'a, T> {

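The new `clone_owned` helper above is a one-liner over `into_owned`: borrowed data is promoted into an `Arc`, while already-owned data is just a cheap `Arc` clone. A std-only sketch of the same clone-on-write-over-`Arc` idea (a simplified hypothetical `MiniCow` type; the real `CowArc` in bevy_utils is generic and also has a `Static` variant):

```rust
use std::sync::Arc;

// Simplified sketch of the clone-on-write pattern `CowArc` uses.
#[derive(Clone, Debug)]
enum MiniCow {
    Borrowed(&'static str),
    Owned(Arc<str>),
}

impl MiniCow {
    // Converts into an owned value, cloning borrowed data into an `Arc`.
    fn into_owned(self) -> MiniCow {
        match self {
            MiniCow::Borrowed(s) => MiniCow::Owned(Arc::from(s)),
            MiniCow::Owned(a) => MiniCow::Owned(a),
        }
    }

    // Equivalent to `.clone().into_owned()`, as in the new `clone_owned`.
    fn clone_owned(&self) -> MiniCow {
        self.clone().into_owned()
    }
}

fn main() {
    let borrowed = MiniCow::Borrowed("assets/model.gltf");
    let owned = borrowed.clone_owned();
    // The original is still usable; the clone is now Arc-backed.
    assert!(matches!(owned, MiniCow::Owned(_)));
    assert!(matches!(borrowed, MiniCow::Borrowed(_)));
}
```

Taking `&self` rather than `self` is what lets call sites like `asset_path.clone_owned()` keep using the original path afterwards.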
@@ -51,8 +51,9 @@ The default feature set enables most of the expected features of a game engine,
 |dds|DDS compressed texture support|
 |detailed_trace|Enable detailed trace event logging. These trace events are expensive even when off, thus they require compile time opt-in|
 |dynamic_linking|Force dynamic linking, which improves iterative compile times|
+|embedded_watcher|Enables watching in memory asset providers for Bevy Asset hot-reloading|
 |exr|EXR image format support|
-|filesystem_watcher|Enables watching the filesystem for Bevy Asset hot-reloading|
+|file_watcher|Enables watching the filesystem for Bevy Asset hot-reloading|
 |flac|FLAC audio format support|
 |glam_assert|Enable assertions to check the validity of parameters passed to glam|
 |jpeg|JPEG image format support|

@ -4,7 +4,7 @@
|
|||
|
||||
use bevy::{
|
||||
asset::io::{
|
||||
file::FileAssetReader, AssetProvider, AssetProviders, AssetReader, AssetReaderError,
|
||||
file::FileAssetReader, AssetReader, AssetReaderError, AssetSource, AssetSourceId,
|
||||
PathStream, Reader,
|
||||
},
|
||||
prelude::*,
|
||||
|
@ -43,13 +43,6 @@ impl<T: AssetReader> AssetReader for CustomAssetReader<T> {
|
|||
) -> BoxedFuture<'a, Result<bool, AssetReaderError>> {
|
||||
self.0.is_directory(path)
|
||||
}
|
||||
|
||||
fn watch_for_changes(
|
||||
&self,
|
||||
event_sender: crossbeam_channel::Sender<bevy_internal::asset::io::AssetSourceEvent>,
|
||||
) -> Option<Box<dyn bevy_internal::asset::io::AssetWatcher>> {
|
||||
self.0.watch_for_changes(event_sender)
|
||||
}
|
||||
}
|
||||
|
||||
/// A plugins that registers our new asset reader
|
||||
|
```diff
@@ -57,24 +50,17 @@ struct CustomAssetReaderPlugin;

 impl Plugin for CustomAssetReaderPlugin {
     fn build(&self, app: &mut App) {
-        let mut asset_providers = app
-            .world
-            .get_resource_or_insert_with::<AssetProviders>(Default::default);
-        asset_providers.insert_reader("CustomAssetReader", || {
-            Box::new(CustomAssetReader(FileAssetReader::new("assets")))
-        });
+        app.register_asset_source(
+            AssetSourceId::Default,
+            AssetSource::build()
+                .with_reader(|| Box::new(CustomAssetReader(FileAssetReader::new("assets")))),
+        );
     }
 }

 fn main() {
     App::new()
-        .add_plugins((
-            CustomAssetReaderPlugin,
-            DefaultPlugins.set(AssetPlugin::Unprocessed {
-                source: AssetProvider::Custom("CustomAssetReader".to_string()),
-                watch_for_changes: false,
-            }),
-        ))
+        .add_plugins((CustomAssetReaderPlugin, DefaultPlugins))
        .add_systems(Startup, setup)
        .run();
 }
```
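The `some_source://path` syntax introduced by this PR can be illustrated with a small, self-contained sketch. This is not Bevy's actual `AssetPath` parser, just the idea behind the `asset_path.source()` accessor: an optional source prefix followed by a path, with the default source used when no prefix is present.

```rust
// Hypothetical sketch (not Bevy's real parser): split "source://path" into an
// optional source name and the path within that source.
fn split_asset_path(full: &str) -> (Option<&str>, &str) {
    match full.split_once("://") {
        // "embedded://foo/bar.txt" -> (Some("embedded"), "foo/bar.txt")
        Some((source, path)) => (Some(source), path),
        // No prefix: the default asset source handles the whole path.
        None => (None, full),
    }
}

fn main() {
    assert_eq!(
        split_asset_path("embedded://asset_processing/e.txt"),
        (Some("embedded"), "asset_processing/e.txt")
    );
    assert_eq!(
        split_asset_path("foo/b.cool.ron"),
        (None, "foo/b.cool.ron")
    );
}
```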
```diff
@@ -1,12 +1,15 @@
 //! Hot reloading allows you to modify assets files to be immediately reloaded while your game is
 //! running. This lets you immediately see the results of your changes without restarting the game.
 //! This example illustrates hot reloading mesh changes.
+//!
+//! Note that hot asset reloading requires the [`AssetWatcher`](bevy::asset::io::AssetWatcher) to be enabled
+//! for your current platform. For desktop platforms, enable the `file_watcher` cargo feature.

 use bevy::prelude::*;

 fn main() {
     App::new()
-        .add_plugins(DefaultPlugins.set(AssetPlugin::default().watch_for_changes()))
+        .add_plugins(DefaultPlugins)
         .add_systems(Startup, setup)
         .run();
 }
```
`examples/asset/processing/e.txt` (new file, 1 line):

```diff
@@ -0,0 +1 @@
+e
```
```diff
@@ -2,7 +2,8 @@
 use bevy::{
     asset::{
-        io::{AssetProviders, Reader, Writer},
+        embedded_asset,
+        io::{Reader, Writer},
         processor::LoadAndSave,
         saver::{AssetSaver, SavedAsset},
         AssetLoader, AsyncReadExt, AsyncWriteExt, LoadContext,
```
```diff
@@ -16,15 +17,6 @@ use thiserror::Error;

 fn main() {
     App::new()
-        .insert_resource(
-            // This is just overriding the default paths to scope this to the correct example folder
-            // You can generally skip this in your own projects
-            AssetProviders::default()
-                .with_default_file_source("examples/asset/processing/assets".to_string())
-                .with_default_file_destination(
-                    "examples/asset/processing/imported_assets".to_string(),
-                ),
-        )
         // Enabling `processed_dev` will configure the AssetPlugin to use asset processing.
         // This will run the AssetProcessor in the background, which will listen for changes to
         // the `assets` folder, run them through configured asset processors, and write the results
```
```diff
@@ -32,9 +24,18 @@ fn main() {
         //
         // The AssetProcessor will create `.meta` files automatically for assets in the `assets` folder,
         // which can then be used to configure how the asset will be processed.
-        .add_plugins((DefaultPlugins.set(AssetPlugin::processed_dev()), TextPlugin))
-        // This is what a deployed app should use
-        // .add_plugins((DefaultPlugins.set(AssetPlugin::processed()), TextPlugin))
+        .add_plugins((
+            DefaultPlugins.set(AssetPlugin {
+                // This is just overriding the default paths to scope this to the correct example folder
+                // You can generally skip this in your own projects
+                mode: AssetMode::ProcessedDev,
+                file_path: "examples/asset/processing/assets".to_string(),
+                processed_file_path: "examples/asset/processing/imported_assets/Default"
+                    .to_string(),
+                ..default()
+            }),
+            TextPlugin,
+        ))
         .add_systems(Startup, setup)
         .add_systems(Update, print_text)
         .run();
```
```diff
@@ -51,6 +52,7 @@ pub struct TextPlugin;

 impl Plugin for TextPlugin {
     fn build(&self, app: &mut App) {
+        embedded_asset!(app, "examples/asset/processing/", "e.txt");
         app.init_asset::<CoolText>()
             .init_asset::<Text>()
             .register_asset_loader(CoolTextLoader)
```
```diff
@@ -199,6 +201,7 @@ struct TextAssets {
     b: Handle<Text>,
     c: Handle<Text>,
     d: Handle<Text>,
+    e: Handle<Text>,
 }

 fn setup(mut commands: Commands, assets: Res<AssetServer>) {
```
```diff
@@ -209,6 +212,7 @@ fn setup(mut commands: Commands, assets: Res<AssetServer>) {
         b: assets.load("foo/b.cool.ron"),
         c: assets.load("foo/c.cool.ron"),
         d: assets.load("d.cool.ron"),
+        e: assets.load("embedded://asset_processing/e.txt"),
     });
 }
```
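The `embedded://asset_processing/e.txt` path loaded above follows the shape `embedded://<root>/<file>`, where the root here matches the example's module path. A hypothetical helper (not a Bevy API) that assembles such a path, just to make the shape explicit:

```rust
// Hypothetical helper: build an "embedded://" asset path from a root name and
// a relative file path. The real path layout is determined by embedded_asset!.
fn embedded_path(root: &str, relative: &str) -> String {
    format!("embedded://{}/{}", root, relative)
}

fn main() {
    // Matches the path the setup system loads in the diff above.
    assert_eq!(
        embedded_path("asset_processing", "e.txt"),
        "embedded://asset_processing/e.txt"
    );
}
```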
```diff
@@ -220,6 +224,7 @@ fn print_text(handles: Res<TextAssets>, texts: Res<Assets<Text>>) {
     println!(" b: {:?}", texts.get(&handles.b));
     println!(" c: {:?}", texts.get(&handles.c));
     println!(" d: {:?}", texts.get(&handles.d));
+    println!(" e: {:?}", texts.get(&handles.e));
     println!("(You can modify source assets and their .meta files to hot-reload changes!)");
     println!();
 }
```
```diff
@@ -4,7 +4,7 @@ use std::{fs::File, io::Write};

 fn main() {
     App::new()
-        .add_plugins(DefaultPlugins.set(AssetPlugin::default().watch_for_changes()))
+        .add_plugins(DefaultPlugins)
         .register_type::<ComponentA>()
         .register_type::<ComponentB>()
         .register_type::<ResourceA>()
```
```diff
@@ -75,7 +75,8 @@ fn load_scene_system(mut commands: Commands, asset_server: Res<AssetServer>) {
 }

 // This system logs all ComponentA components in our world. Try making a change to a ComponentA in
-// load_scene_example.scn. You should immediately see the changes appear in the console.
+// load_scene_example.scn. If you enable the `file_watcher` cargo feature you should immediately see
+// the changes appear in the console whenever you make a change.
 fn log_system(
     query: Query<(Entity, &ComponentA), Changed<ComponentA>>,
     res: Option<Res<ResourceA>>,
```
```diff
@@ -36,10 +36,7 @@ use bevy::{

 fn main() {
     App::new()
-        .add_plugins((
-            DefaultPlugins.set(AssetPlugin::default().watch_for_changes()),
-            PostProcessPlugin,
-        ))
+        .add_plugins((DefaultPlugins, PostProcessPlugin))
         .add_systems(Startup, setup)
         .add_systems(Update, (rotate, update_settings))
         .run();
```
```diff
@@ -4,9 +4,10 @@
 //! replacing the path as appropriate.
 //! In case of multiple scenes, you can select which to display by adapting the file path: `/path/to/model.gltf#Scene1`.
 //! With no arguments it will load the `FlightHelmet` glTF model from the repository assets subdirectory.
+//!
+//! If you want to hot reload asset changes, enable the `file_watcher` cargo feature.

 use bevy::{
-    asset::io::AssetProviders,
     math::Vec3A,
     prelude::*,
     render::primitives::{Aabb, Sphere},
```
```diff
@@ -29,9 +30,6 @@ fn main() {
             color: Color::WHITE,
             brightness: 1.0 / 5.0f32,
         })
-        .insert_resource(AssetProviders::default().with_default_file_source(
-            std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string()),
-        ))
         .add_plugins((
             DefaultPlugins
                 .set(WindowPlugin {
```
```diff
@@ -41,7 +39,10 @@ fn main() {
                 }),
                 ..default()
             })
-            .set(AssetPlugin::default().watch_for_changes()),
+            .set(AssetPlugin {
+                file_path: std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string()),
+                ..default()
+            }),
             CameraControllerPlugin,
             SceneViewerPlugin,
             MorphViewerPlugin,
```
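The `CARGO_MANIFEST_DIR` fallback used for `file_path` in the scene viewer diff is a small standalone pattern worth calling out: when run via `cargo run` the variable points at the crate root, and otherwise the current directory is used. A minimal sketch (the helper name is hypothetical):

```rust
use std::env;

// Resolve the asset root: the crate's manifest directory under `cargo run`,
// or the current working directory (".") when the variable is not set.
fn asset_root() -> String {
    env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string())
}

fn main() {
    // Whichever branch is taken, the result is a usable, non-empty path.
    assert!(!asset_root().is_empty());
    println!("asset root: {}", asset_root());
}
```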