bevy/crates/bevy_audio/src/sinks.rs

use bevy_ecs::component::Component;
use bevy_math::Vec3;
use bevy_transform::prelude::Transform;
use rodio::{Sink, SpatialSink};
/// Common interactions with an audio sink.
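///
/// Both [`AudioSink`] and [`SpatialAudioSink`] implement this trait, so systems can be written
/// generically over either kind of sink. A minimal sketch (the `mute` helper below is purely
/// illustrative and not part of this crate):
///
/// ```ignore
/// use bevy_audio::AudioSinkPlayback;
///
/// /// Silence any kind of sink without pausing or stopping it.
/// fn mute(sink: &impl AudioSinkPlayback) {
///     sink.set_volume(0.0);
/// }
/// ```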
pub trait AudioSinkPlayback {
    /// Gets the volume of the sound.
    ///
    /// The value `1.0` is the "normal" volume (unfiltered input). Any value other than `1.0`
    /// will multiply each sample by this value.
    fn volume(&self) -> f32;
    /// Changes the volume of the sound.
    ///
    /// The value `1.0` is the "normal" volume (unfiltered input). Any value other than `1.0`
    /// will multiply each sample by this value.
    ///
    /// # Note on Audio Volume
    ///
    /// An increase of 10 decibels (dB) roughly corresponds to a doubling of the perceived volume.
    /// Because this function scales the amplitude rather than the perceived volume, a conversion
    /// may be necessary. For example, to halve the perceived volume you need to decrease the
    /// volume by 10 dB. This corresponds to 20 * log10(x) = -10 dB, which gives
    /// x = 10^(-10/20) ≈ 0.316. Multiply the current volume by 0.316 to halve the perceived
    /// volume.
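    ///
    /// A quick sketch of that conversion (assuming `sink` is some value implementing this trait):
    ///
    /// ```ignore
    /// // Halve the perceived volume: a -10 dB change is an amplitude factor of 10^(-10/20) ≈ 0.316.
    /// let db_change: f32 = -10.0;
    /// let factor = 10f32.powf(db_change / 20.0);
    /// sink.set_volume(sink.volume() * factor);
    /// ```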
    fn set_volume(&self, volume: f32);
    /// Gets the speed of the sound.
    ///
    /// The value `1.0` is the "normal" speed (unfiltered input). Any value other than `1.0`
    /// will change the play speed of the sound.
    fn speed(&self) -> f32;
    /// Changes the speed of the sound.
    ///
    /// The value `1.0` is the "normal" speed (unfiltered input). Any value other than `1.0`
    /// will change the play speed of the sound.
    fn set_speed(&self, speed: f32);
    /// Resumes playback of a paused sink.
    ///
    /// No effect if not paused.
    fn play(&self);
    /// Pauses playback of this sink.
    ///
    /// No effect if already paused.
    /// A paused sink can be resumed with [`play`](Self::play).
    fn pause(&self);
    /// Toggles the playback of this sink.
    ///
    /// Will pause if playing, and resume if paused.
    fn toggle(&self) {
        if self.is_paused() {
            self.play();
        } else {
            self.pause();
        }
    }
    /// Is this sink paused?
    ///
    /// Sinks can be paused and resumed using [`pause`](Self::pause) and [`play`](Self::play).
    fn is_paused(&self) -> bool;
    /// Stops the sink.
    ///
    /// It won't be possible to restart it afterwards.
    fn stop(&self);
    /// Returns true if this sink has no more sounds to play.
    fn empty(&self) -> bool;
}
/// Used to control audio during playback.
///
/// Bevy inserts this component onto your entities when it begins playing an audio source.
/// Use [`AudioPlayer`][crate::AudioPlayer] to trigger playback.
///
/// You can use this component to modify the playback settings while the audio is playing.
///
/// If this component is removed from an entity, and an [`AudioSource`][crate::AudioSource] is
/// attached to that entity, that [`AudioSource`][crate::AudioSource] will start playing. If
/// that source is unchanged, that translates to the audio restarting.
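///
/// A minimal sketch of controlling a playing sound from a system (`MyMusic` is a hypothetical
/// marker component; the sink is only present once playback has actually started):
///
/// ```ignore
/// use bevy_ecs::prelude::*;
/// use bevy_audio::{AudioSink, AudioSinkPlayback};
///
/// #[derive(Component)]
/// struct MyMusic;
///
/// fn toggle_pause_music(music: Query<&AudioSink, With<MyMusic>>) {
///     if let Ok(sink) = music.get_single() {
///         sink.toggle();
///     }
/// }
/// ```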
#[derive(Component)]
pub struct AudioSink {
    pub(crate) sink: Sink,
}
impl AudioSinkPlayback for AudioSink {
    fn volume(&self) -> f32 {
        self.sink.volume()
    }
    fn set_volume(&self, volume: f32) {
        self.sink.set_volume(volume);
    }
    fn speed(&self) -> f32 {
        self.sink.speed()
    }
    fn set_speed(&self, speed: f32) {
        self.sink.set_speed(speed);
    }
    fn play(&self) {
        self.sink.play();
    }
    fn pause(&self) {
        self.sink.pause();
    }
    fn is_paused(&self) -> bool {
        self.sink.is_paused()
    }
    fn stop(&self) {
        self.sink.stop();
    }
    fn empty(&self) -> bool {
        self.sink.empty()
    }
}
/// Used to control spatial audio during playback.
///
/// Bevy inserts this component onto your entities when it begins playing an audio source
/// that's configured to use spatial audio.
///
/// You can use this component to modify the playback settings while the audio is playing.
///
/// If this component is removed from an entity, and an [`AudioSource`][crate::AudioSource] is
/// attached to that entity, that [`AudioSource`][crate::AudioSource] will start playing. If
/// that source is unchanged, that translates to the audio restarting.
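///
/// A minimal sketch of driving the emitter position from an entity's [`Transform`] each frame
/// (depending on the Bevy version, spatial sinks may already be updated automatically from
/// transforms, making a manual system like this unnecessary):
///
/// ```ignore
/// use bevy_ecs::prelude::*;
/// use bevy_transform::prelude::Transform;
/// use bevy_audio::SpatialAudioSink;
///
/// fn update_emitter_positions(emitters: Query<(&Transform, &SpatialAudioSink)>) {
///     for (transform, sink) in &emitters {
///         sink.set_emitter_position(transform.translation);
///     }
/// }
/// ```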
#[derive(Component)]
pub struct SpatialAudioSink {
    pub(crate) sink: SpatialSink,
}
impl AudioSinkPlayback for SpatialAudioSink {
    fn volume(&self) -> f32 {
        self.sink.volume()
    }
    fn set_volume(&self, volume: f32) {
        self.sink.set_volume(volume);
    }
    fn speed(&self) -> f32 {
        self.sink.speed()
    }
    fn set_speed(&self, speed: f32) {
        self.sink.set_speed(speed);
    }
    fn play(&self) {
        self.sink.play();
    }
    fn pause(&self) {
        self.sink.pause();
    }
    fn is_paused(&self) -> bool {
        self.sink.is_paused()
    }
    fn stop(&self) {
        self.sink.stop();
    }
    fn empty(&self) -> bool {
        self.sink.empty()
    }
}
impl SpatialAudioSink {
    /// Set the positions of the two ears.
    pub fn set_ears_position(&self, left_position: Vec3, right_position: Vec3) {
        self.sink.set_left_ear_position(left_position.to_array());
        self.sink.set_right_ear_position(right_position.to_array());
    }
    /// Set the listener position, with an ear on each side separated by `gap`.
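    ///
    /// For example, given the listener's `Transform` and a hypothetical ear gap of `4.0`
    /// world units:
    ///
    /// ```ignore
    /// sink.set_listener_position(*listener_transform, 4.0);
    /// ```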
    pub fn set_listener_position(&self, position: Transform, gap: f32) {
        self.set_ears_position(
            position.translation + position.left() * gap / 2.0,
            position.translation + position.right() * gap / 2.0,
        );
    }
    /// Set the emitter position.
    pub fn set_emitter_position(&self, position: Vec3) {
        self.sink.set_emitter_position(position.to_array());
    }
}