# Objective
- `CircularSegment` and `CircularSector` are well defined 2D shapes with
both an area and a perimeter.
# Solution
- This PR implements `perimeter` for both and moves the existing `area`
functions into the `Measured2d` implementations.
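For reference, a rough sketch of the geometry involved (not the actual implementation; `angle` here means the full arc angle in radians):
```rust
// A sector's boundary is its arc plus two radii; a segment's is its arc plus its chord.
fn sector_perimeter(radius: f32, angle: f32) -> f32 {
    2.0 * radius + radius * angle
}

fn segment_perimeter(radius: f32, angle: f32) -> f32 {
    // arc length plus chord length
    radius * angle + 2.0 * radius * (angle / 2.0).sin()
}
```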
## Testing
- The `arc_tests` have been extended to also check for perimeters.
# Objective
Bevy seems to want to standardize on "American English" spellings. Not
sure if this is laid out anywhere in writing, but see also #15947.
While perusing the docs for `typos`, I noticed that it has a `locale`
config option and tried it out.
## Solution
Switch to `en-us` locale in the `typos` config and run `typos -w`
## Migration Guide
The following methods or fields have been renamed from `*dependants*` to
`*dependents*`.
- `ProcessorAssetInfo::dependants`
- `ProcessorAssetInfos::add_dependant`
- `ProcessorAssetInfos::non_existent_dependants`
- `AssetInfo::dependants_waiting_on_load`
- `AssetInfo::dependants_waiting_on_recursive_dep_load`
- `AssetInfos::loader_dependants`
- `AssetInfos::remove_dependants_and_labels`
# Objective
- Fixes #15963
## Solution
- Implement `TryFrom<Polygon<N>> for ConvexPolygon<N>`
- Implement `From<ConvexPolygon<N>> for Polygon<N>` (see the usage sketch after this list)
- Remove `pub` from `vertices`
- Add `ConvexPolygon::vertices()` to get read only access to the
vertices of a convex polygon.
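A hedged usage sketch of the two conversions; the exact constructor and error type are assumptions:
```rust
// A square given in counter-clockwise order is convex, so the fallible conversion succeeds.
let polygon = Polygon::new([Vec2::ZERO, Vec2::X, Vec2::ONE, Vec2::Y]);
let convex = ConvexPolygon::try_from(polygon).expect("the vertices form a convex polygon");
// The reverse direction is always lossless.
let back: Polygon<4> = convex.into();
```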
# Objective
Make `StableInterpolate` "just work" on tuples whose parts are each
`StableInterpolate` types. These types arise notably through
`Curve::zip` (or just through explicit mapping of a similar form). It
would otherwise be kind of frustrating to stumble upon such a thing and
then realize that, e.g., automatic resampling just doesn't work, even
though there is a very "obvious" way to do it.
## Solution
Infer `StableInterpolate` on tuples of up to size 11. I can make that
number bigger, if desired. Unfortunately, I don't think that our
standard "fake variadics" tools actually work for this; the anonymous
field accessors of tuples are `:tt` for purposes of macro expansion,
which means that you can't simplify away the identifiers by doing
something clever like using recursion (which would work if they were
`:expr`). Maybe someone who knows some incredibly dark magic could chime
in with a better solution.
The expanded impls look like this:
```rust
impl<
    T0: StableInterpolate,
    T1: StableInterpolate,
    T2: StableInterpolate,
    T3: StableInterpolate,
    T4: StableInterpolate,
> StableInterpolate for (T0, T1, T2, T3, T4)
{
    fn interpolate_stable(&self, other: &Self, t: f32) -> Self {
        (
            <T0 as StableInterpolate>::interpolate_stable(&self.0, &other.0, t),
            <T1 as StableInterpolate>::interpolate_stable(&self.1, &other.1, t),
            <T2 as StableInterpolate>::interpolate_stable(&self.2, &other.2, t),
            <T3 as StableInterpolate>::interpolate_stable(&self.3, &other.3, t),
            <T4 as StableInterpolate>::interpolate_stable(&self.4, &other.4, t),
        )
    }
}
```
## Testing
Expanded macros; it compiles.
## Future
Make a version of the fake variadics workflow that supports this kind of
thing.
# Objective
Improve the average user's ability to understand what the heck is going
on with the Curve API.
## Solution
I wrote some docs. I doubt these are perfect; I'm probably far too close
to this for that to be the case. :)
# Objective
The previous `PhantomData` instances were written somewhat lazily, so
they were just things like `PhantomData<T>` for curves with an output
type of `T`. This looks innocuous, but it unnecessarily constrains
`Send/Sync` inference based on `T`. See
[here](https://doc.rust-lang.org/nomicon/phantom-data.html#table-of-phantomdata-patterns).
## Solution
Switch to `PhantomData` of the form `PhantomData<fn() -> T>` for most of
these adaptors. Since they only have a functional relationship to `T`
(i.e. it shows up in the return type of trait methods), this is more
accurate.
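A minimal before/after sketch on a hypothetical adaptor (not one of the actual bevy_math types):
```rust
use core::marker::PhantomData;

// Before: Send/Sync (and drop-check behavior) are tied to `T` for no good reason.
struct AdaptorBefore<T, F> {
    f: F,
    _phantom: PhantomData<T>,
}

// After: only the functional, output-type relationship to `T` is recorded.
struct AdaptorAfter<T, F> {
    f: F,
    _phantom: PhantomData<fn() -> T>,
}
```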
## Testing
Tested by compiling Bevy.
Co-authored-by: François Mockers <mockersf@gmail.com>
# Objective
A bunch of code is used only if you care about the `Curve` trait. Put it
behind a feature so it can be ignored if wanted.
## Solution
Added a default feature `curve` to `bevy_math` which feature-gates the
`curve` module and internal integrations.
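Roughly, the gating looks like this (a sketch, assuming the module lives at `bevy_math::curve`):
```rust
// In bevy_math's lib.rs (sketch): the whole module disappears without the feature.
#[cfg(feature = "curve")]
pub mod curve;
```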
## Testing
Tested compiling with the feature enabled and disabled.
# Objective
- The `interpolation` crate provides all the curve functions, but some of
them were wrong
- We have a partial solution where some functions come from the
external crate and some from bevy_math
## Solution
- Move them all to bevy_math
- Remove the dependency on `interpolation`
## Testing
Playing the `easing_functions` example
![easing-functions](https://github.com/user-attachments/assets/88832f34-4bb3-4dc2-85af-7b9e4fa23e52)
# Objective
Simplify the API surrounding easing curves. Broaden the base of types
that support easing.
## Solution
There is now a single library function, `easing_curve`, which constructs
a unit-parametrized easing curve between two values based on an
`EaseFunction`:
```rust
/// Given a `start` and `end` value, create a curve parametrized over [the unit interval]
/// that connects them, using the given [ease function] to determine the form of the
/// curve in between.
///
/// [the unit interval]: Interval::UNIT
/// [ease function]: EaseFunction
pub fn easing_curve<T: Ease>(start: T, end: T, ease_fn: EaseFunction) -> EasingCurve<T> { //... }
```
As this shows, the type of the output curve is generic only in `T`. In
particular, as long as `T` is `Reflect` (and `FromReflect` etc. — i.e.,
a standard "well-behaved" reflectable type), `EasingCurve<T>` is also
`Reflect`, and there is no special field handling nonsense. Therefore,
`EasingCurve` is the kind of thing that would be able to be easily
changed in an editor. This is made possible by storing the actual
`EaseFunction` on `EasingCurve<T>` instead of indirecting through some
kind of function type (which generally leads to issues with reflection).
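A hedged usage sketch (assuming `Vec3` implements `Ease` and that `EaseFunction` has a `CubicInOut` variant):
```rust
// Ease from the origin to (1, 1, 1) over the unit interval.
let curve = easing_curve(Vec3::ZERO, Vec3::ONE, EaseFunction::CubicInOut);
let halfway = curve.sample_clamped(0.5);
```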
The types that can be eased are those that implement a trait `Ease`:
```rust
/// A type whose values can be eased between.
///
/// This requires the construction of an interpolation curve that actually extends
/// beyond the curve segment that connects two values, because an easing curve may
/// extrapolate before the starting value and after the ending value. This is
/// especially common in easing functions that mimic elastic or springlike behavior.
pub trait Ease: Sized {
    /// Given `start` and `end` values, produce a curve with [unlimited domain]
    /// that:
    /// - takes a value equivalent to `start` at `t = 0`
    /// - takes a value equivalent to `end` at `t = 1`
    /// - has constant speed everywhere, including outside of `[0, 1]`
    ///
    /// [unlimited domain]: Interval::EVERYWHERE
    fn interpolating_curve_unbounded(start: &Self, end: &Self) -> impl Curve<Self>;
}
```
(I know, I know, yet *another* interpolation trait. See 'Future
direction'.)
The other existing easing functions from the previous version of this
module have also become new members of `EaseFunction`: `Linear`,
`Steps`, and `Elastic` (which maybe needs a different name). The latter
two are parametrized.
## Testing
Tested using the `easing_functions` example. I also axed the
`cubic_curve` example which was of questionable value and replaced it
with `eased_motion`, which uses this API in the context of animation:
https://github.com/user-attachments/assets/3c802992-6b9b-4b56-aeb1-a47501c29ce2
---
## Future direction
Morally speaking, `Ease` is incredibly similar to `StableInterpolate`.
Probably, we should just merge `StableInterpolate` into `Ease`, and then
make `SmoothNudge` an automatic extension trait of `Ease`. The reason I
didn't do that is that `StableInterpolate` is not implemented for
`VectorSpace` because of concerns about the `Color` types, and I wanted
to avoid controversy. I think that may be a good idea though.
As Alice mentioned before, we should also probably get rid of the
`interpolation` dependency.
The parametrized `Elastic` variant probably also needs some additional
work (e.g. renaming, in/out/in-out variants, etc.) if we want to keep
it.
# Objective
The `new` constructors for our ray types currently take a `Vec2`/`Vec3`
instead of a `Dir2`/`Dir3`. This is confusing and footgunny for several
reasons.
- Which one of these is the direction? You can't see it from the type.
```rust
let ray = Ray2d::new(Vec2::X, Vec2::X);
```
- Many engines allow unnormalized rays, and this can affect ray cast
results by scaling the time of impact. However, in Bevy, rays are
*always* normalized despite what the input argument in this case
implies, and ray cast results are *not* scaled.
```rust
// The true ray direction is still normalized, unlike what you'd expect.
let ray = Ray2d::new(Vec2::X, Vec2::new(5.0, 0.0));
```
These cases are what the direction types are intended for, and we should
use them as such.
## Solution
Use `Dir2`/`Dir3` in the constructors.
```rust
let ray = Ray2d::new(Vec2::X, Dir2::X);
```
We *could* also use `impl TryInto<DirN>`, which would allow both vectors
and direction types, and then panic if the input is not normalized. This
could be fine for ergonomics in some cases, but especially for rays, I
think it's better to take an explicit direction type here.
---
## Migration Guide
`Ray2d::new` and `Ray3d::new` now take a `Dir2` and `Dir3` instead of
`Vec2` and `Vec3` respectively for the ray direction.
# Objective
Several of our APIs (namely gizmos and bounding) use isometries on
current Bevy main. This is nicer than separate properties in a lot of
cases, but users have still expressed usability concerns.
One problem is that in a lot of cases, you only care about e.g.
translation, so you end up with this:
```rust
gizmos.cross_2d(
    Isometry2d::from_translation(Vec2::new(-160.0, 120.0)),
    12.0,
    FUCHSIA,
);
```
The isometry adds quite a lot of length and verbosity, and isn't really
that relevant since only the translation is important here.
It would be nice if you could use the translation directly, and only
supply an isometry if both translation and rotation are needed. This
would make the following possible:
```rust
gizmos.cross_2d(Vec2::new(-160.0, 120.0), 12.0, FUCHSIA);
```
removing a lot of verbosity.
## Solution
Implement `From<Vec2>` and `From<Rot2>` for `Isometry2d`, and
`From<Vec3>`, `From<Vec3A>`, and `From<Quat>` for `Isometry3d`. These
are lossless conversions that fit the semantics of `From`.
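In isolation, the conversions look like this:
```rust
// A pure translation or a pure rotation can now be converted directly.
let translation_only: Isometry2d = Vec2::new(-160.0, 120.0).into();
let rotation_only: Isometry2d = Rot2::degrees(90.0).into();
// Both together still go through the usual constructor.
let both = Isometry2d::new(Vec2::new(-160.0, 120.0), Rot2::degrees(90.0));
```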
This makes the proposed API possible! The methods must now simply take
an `impl Into<IsometryNd>`, and this works:
```rust
gizmos.cross_2d(Vec2::new(-160.0, 120.0), 12.0, FUCHSIA);
```
# Objective
- As discussed on
[Discord](https://discord.com/channels/691052431525675048/1203087353850364004/1285300659746246849),
implement a `ConvexPolygon` 2D math primitive and associated mesh
builder.
- The original goal was to have a mesh builder for the simplest (i.e.
convex) polygons.
## Solution
- The `ConvexPolygon` is created from its vertices.
- The convexity of the polygon is checked when created via `new()` by
verifying that the winding order of all the triangles formed with
adjacent vertices is the same (see the sketch after this list).
- The `ConvexPolygonMeshBuilder` uses an anchor vertex and goes through
every adjacent pair of vertices in the polygon to form triangles that
fill up the polygon.
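A rough sketch of that winding-order check (illustrative only, not the actual implementation):
```rust
// Convex iff the cross products of consecutive edges never change sign.
fn is_convex(vertices: &[Vec2]) -> bool {
    let n = vertices.len();
    let mut sign = 0.0_f32;
    for i in 0..n {
        let a = vertices[i];
        let b = vertices[(i + 1) % n];
        let c = vertices[(i + 2) % n];
        let cross = (b - a).perp_dot(c - b);
        if cross != 0.0 {
            if sign != 0.0 && cross.signum() != sign {
                return false;
            }
            sign = cross.signum();
        }
    }
    true
}
```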
## Testing
- Tested locally with my own simple `ConvexPolygonMeshBuilder` usage.
# Objective
Allow curve adaptors to be reliably `Reflect` even if the curves they
hold are not `FromReflect`. This allows them, for example, to be used in
`bevy_animation`. I previously addressed this with the functional
adaptors, but I forgot to address this in the case of fields that hold
other curves and not arbitrary functions.
## Solution
Do the following on every curve adaptor that holds another curve:
```rust
// old:
#[derive(Reflect)]
```
```rust
// new:
#[derive(Reflect, FromReflect)]
#[reflect(from_reflect = false)]
```
This looks inane, but it's necessary because the default
`#[derive(Reflect)]` macro places `FromReflect` bounds on everything. To
avoid this, we opt out of deriving `FromReflect` with that macro by
adding `#[reflect(from_reflect = false)]`, then separately derive
`FromReflect`. (Of course, the latter still has the `FromReflect`
bounds, which is fine.)
# Objective
- Followup for #14788
- Support the most common ease functions
## Solution
- Use the crate
[`interpolation`](https://docs.rs/interpolation/0.3.0/interpolation/trait.Ease.html)
which has them all
- it's already used by bevy_easings, bevy_tweening, be_tween,
bevy_tweening_captured, bevy_enoki, kayak_ui in the Bevy ecosystem for
various easing/tweening/interpolation
# Objective
Currently, sample-interpolated curves (such as those used by the glTF
loader for animations) do unnecessary extra work when `sample_clamped`
is called, since their implementations of `sample_unchecked` are already
clamped. Eliminating this redundant sampling is a small, easy
performance win which doesn't compromise on the animation system's
internal usage of `sample_clamped`, which guarantees that it never
samples curves out-of-bounds.
## Solution
For sample-interpolated curves, define `sample_clamped` in the way
`sample_unchecked` is currently defined, and then redirect
`sample_unchecked` to `sample_clamped`. This is arguably a more
idiomatic way of using the `cores` as well, which is nice.
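A minimal sketch of the pattern on a hypothetical two-keyframe curve (the real types go through the `cores` abstractions; names here are illustrative):
```rust
struct TwoPointCurve {
    start: f32,
    end: f32,
}

impl Curve<f32> for TwoPointCurve {
    fn domain(&self) -> Interval {
        Interval::UNIT
    }

    fn sample_clamped(&self, t: f32) -> f32 {
        // Clamping and interpolation happen together, so the real work lives here.
        let t = t.clamp(0.0, 1.0);
        self.start * (1.0 - t) + self.end * t
    }

    fn sample_unchecked(&self, t: f32) -> f32 {
        // Redirect instead of re-doing an already-clamped sample.
        self.sample_clamped(t)
    }
}
```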
## Testing
Ran `many_foxes` to make sure I didn't break anything.
# Objective
Citing @mweatherley
> There is a lot of shortfall for simple cases— e.g., we should have
library functions for making a curve connecting two points, eased
versions of that, and so on.
## Solution
This PR implements
- a simple `Easing` trait which is implemented for all `impl Curve<f32>`
types. We can't really guarantee that these curves have unit interval
domain, which some people would probably expect, but it is documented
that this isn't the case for these types and we redirect to
`EasingCurve` which is used for that purpose
- an `EasingCurve` struct, which is used to interpolate between two
values `start` and `end` using an `impl Easing` curve, where the curve
is guaranteed to be reparametrized
- a `LinearCurve` which linearly interpolates between two values `start`
and `end`
- a `CubicBezierCurve` which interpolates between `start` and `end`
values using a `CubicSegment`
- a `StepCurve` which interpolates between `start` and `end` with a
step function with `n` steps
- an `ElasticCurve` which interpolates between `start` and `end` with
spring-like behavior where the elasticity of the spring is configurable
- some `FunctionCurve` easing curves for different popular functions
including: `quadratic_ease_in`, `quadratic_ease_out`, `smoothstep`,
`identity`
## Testing
- there are a few new tests for all of these in the main module
---------
Co-authored-by: eckz <567737+eckz@users.noreply.github.com>
Co-authored-by: Miles Silberling-Cook <NthTensor@users.noreply.github.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
# Objective
This PR extends and reworks the material from #15282 by allowing
arbitrary curves to be used by the animation system to animate arbitrary
properties. The goals of this work are to:
- Allow far greater flexibility in how animations are allowed to be
defined in order to be used with `bevy_animation`.
- Delegate responsibility over keyframe interpolation to `bevy_math` and
the `Curve` libraries and reduce reliance on keyframes in animation
definitions generally.
- Move away from allowing the glTF spec to completely define animations
on a mechanical level.
## Solution
### Overview
At a high level, curves have been incorporated into the animation system
using the `AnimationCurve` trait (closely related to what was
`Keyframes`). From the top down:
1. In `animate_targets`, animations are driven by `VariableCurve`, which
is now a thin wrapper around a `Box<dyn AnimationCurve>`.
2. `AnimationCurve` is something built out of a `Curve`, and it tells
the animation system how to use the curve's output to actually mutate
component properties. The trait looks like this:
```rust
/// A low-level trait that provides control over how curves are actually applied to entities
/// by the animation system.
///
/// Typically, this will not need to be implemented manually, since it is automatically
/// implemented by [`AnimatableCurve`] and other curves used by the animation system
/// (e.g. those that animate parts of transforms or morph weights). However, this can be
/// implemented manually when `AnimatableCurve` is not sufficiently expressive.
///
/// In many respects, this behaves like a type-erased form of [`Curve`], where the output
/// type of the curve is remembered only in the components that are mutated in the
/// implementation of [`apply`].
///
/// [`apply`]: AnimationCurve::apply
pub trait AnimationCurve: Reflect + Debug + Send + Sync {
    /// Returns a boxed clone of this value.
    fn clone_value(&self) -> Box<dyn AnimationCurve>;

    /// The range of times for which this animation is defined.
    fn domain(&self) -> Interval;

    /// Write the value of sampling this curve at time `t` into `transform` or `entity`,
    /// as appropriate, interpolating between the existing value and the sampled value
    /// using the given `weight`.
    fn apply<'a>(
        &self,
        t: f32,
        transform: Option<Mut<'a, Transform>>,
        entity: EntityMutExcept<'a, (Transform, AnimationPlayer, Handle<AnimationGraph>)>,
        weight: f32,
    ) -> Result<(), AnimationEvaluationError>;
}
```
3. The conversion process from a `Curve` to an `AnimationCurve` involves
using wrappers which communicate the intent to animate a particular
property. For example, here is `TranslationCurve`, which wraps a
`Curve<Vec3>` and uses it to animate `Transform::translation`:
```rust
/// This type allows a curve valued in `Vec3` to become an [`AnimationCurve`] that animates
/// the translation component of a transform.
pub struct TranslationCurve<C>(pub C);
```
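A hedged usage sketch, following the same pattern as the migration example later in this description:
```rust
// Build a Vec3-valued keyframe curve and wrap it so it drives `Transform::translation`.
animation_clip.add_curve_to_target(
    animation_target_id,
    AnimatableKeyframeCurve::new([(0.0, Vec3::ZERO), (1.0, Vec3::X * 2.0)])
        .map(TranslationCurve)
        .expect("failed to build translation curve"),
);
```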
### Animatable Properties
The `AnimatableProperty` trait survives in the transition, and it can be
used to allow curves to animate arbitrary component properties. The
updated documentation for `AnimatableProperty` explains this process:
<details>
<summary>Expand AnimatableProperty example</summary>
An `AnimatableProperty` is a value on a component that Bevy can animate.
You can implement this trait on a unit struct in order to support animating
custom components other than transforms and morph weights. Use that type in
conjunction with `AnimatableCurve` (and perhaps `AnimatableKeyframeCurve`
to define the animation itself). For example, in order to animate the font
size of a text section from 24 pt. to 80 pt., you might use:
```rust
#[derive(Reflect)]
struct FontSizeProperty;
impl AnimatableProperty for FontSizeProperty {
    type Component = Text;
    type Property = f32;

    fn get_mut(component: &mut Self::Component) -> Option<&mut Self::Property> {
        Some(&mut component.sections.get_mut(0)?.style.font_size)
    }
}
```
You can then create an `AnimationClip` to animate this property like so:
```rust
let mut animation_clip = AnimationClip::default();
animation_clip.add_curve_to_target(
    animation_target_id,
    AnimatableKeyframeCurve::new([
        (0.0, 24.0),
        (1.0, 80.0),
    ])
    .map(AnimatableCurve::<FontSizeProperty, _>::from_curve)
    .expect("Failed to create font size curve"),
);
```
Here, the use of `AnimatableKeyframeCurve` creates a curve out of the given
keyframe time-value pairs, using the `Animatable` implementation of `f32` to
interpolate between them. The invocation of `AnimatableCurve::from_curve` with
`FontSizeProperty` indicates that the `f32` output from that curve is to be
used to animate the font size of a `Text` component (as configured above).
</details>
### glTF Loading
glTF animations are now loaded into `Curve` types of various kinds,
depending on what is being animated and what interpolation mode is being
used. Those types get wrapped into and converted into `Box<dyn
AnimationCurve>` and shoved inside of a `VariableCurve` just like
everybody else.
### Morph Weights
There is an `IterableCurve` abstraction which allows sampling these from
a contiguous buffer without allocating. Its only reason for existing is
that Rust disallows you from naming function types; otherwise we would
just use `Curve` with an iterator output type. (The iterator involves
`Map`, so the function type would need to be nameable, and it is not.)
A `WeightsCurve` adaptor turns an `IterableCurve` into an
`AnimationCurve`, so it behaves like everything else in that regard.
## Testing
Tested by running existing animation examples. Interpolation logic has
had additional tests added within the `Curve` API to replace the tests
in `bevy_animation`. Some kinds of out-of-bounds errors have become
impossible.
Performance testing on `many_foxes` (`animate_targets`) suggests that
performance is very similar to the existing implementation. Here are a
couple trace histograms across different runs (yellow is this branch,
red is main).
<img width="669" alt="Screenshot 2024-09-27 at 9 41 50 AM"
src="https://github.com/user-attachments/assets/5ba4e9ac-3aea-452e-aaf8-1492acc2d7fc">
<img width="673" alt="Screenshot 2024-09-27 at 9 45 18 AM"
src="https://github.com/user-attachments/assets/8982538b-04cf-46b5-97b2-164c6bc8162e">
---
## Migration Guide
Most user code that does not directly deal with `AnimationClip` and
`VariableCurve` will not need to be changed. On the other hand,
`VariableCurve` has been completely overhauled. If you were previously
defining animation curves in code using keyframes, you will need to
migrate that code to use curve constructors instead. For example, a
rotation animation defined using keyframes and added to an animation
clip like this:
```rust
animation_clip.add_curve_to_target(
    animation_target_id,
    VariableCurve {
        keyframe_timestamps: vec![0.0, 1.0, 2.0, 3.0, 4.0],
        keyframes: Keyframes::Rotation(vec![
            Quat::IDENTITY,
            Quat::from_axis_angle(Vec3::Y, PI / 2.),
            Quat::from_axis_angle(Vec3::Y, PI / 2. * 2.),
            Quat::from_axis_angle(Vec3::Y, PI / 2. * 3.),
            Quat::IDENTITY,
        ]),
        interpolation: Interpolation::Linear,
    },
);
```
would now be added like this:
```rust
animation_clip.add_curve_to_target(
    animation_target_id,
    AnimatableKeyframeCurve::new([0.0, 1.0, 2.0, 3.0, 4.0].into_iter().zip([
        Quat::IDENTITY,
        Quat::from_axis_angle(Vec3::Y, PI / 2.),
        Quat::from_axis_angle(Vec3::Y, PI / 2. * 2.),
        Quat::from_axis_angle(Vec3::Y, PI / 2. * 3.),
        Quat::IDENTITY,
    ]))
    .map(RotationCurve)
    .expect("Failed to build rotation curve"),
);
```
Note that the interface of `AnimationClip::add_curve_to_target` has also
changed (as this example shows, if subtly), and now takes its curve
input as an `impl AnimationCurve`. If you need to add a `VariableCurve`
directly, a new method `add_variable_curve_to_target` accommodates that
(and serves as a one-to-one migration in this regard).
### For reviewers
The diff is pretty big, and the structure of some of the changes might
not be super-obvious:
- `keyframes.rs` became `animation_curves.rs`, and `AnimationCurve` is
based heavily on `Keyframes`, with the adaptors also largely following
suit.
- The Curve API adaptor structs were moved from `bevy_math::curve::mod`
into their own module `adaptors`. There are no functional changes to how
these adaptors work; this is just to make room for the specialized
reflection implementations since `mod.rs` was getting kind of cramped.
- The new module `gltf_curves` holds the additional curve constructions
that are needed by the glTF loader. Note that the loader uses a mix of
these and off-the-shelf `bevy_math` curve stuff.
- `animatable.rs` no longer holds logic related to keyframe
interpolation, which is now delegated to the existing abstractions in
`bevy_math::curve::cores`.
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
Co-authored-by: aecsocket <43144841+aecsocket@users.noreply.github.com>
# Objective
Unlike `Capsule3d`, which has a `.to_cylinder` method, `Capsule2d`
doesn't have an equivalent `.to_inner_rectangle` method, and as shown by
#15191 this is surprisingly easy to get wrong.
## Solution
Implemented a `Capsule2d::to_inner_rectangle` method, matching how it is
computed in the fixed `Capsule2d` shape sampling. As I was adding tests I
noticed `Capsule2d` didn't implement `Measured2d`, so I did that as well.
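A sketch of what this computes (field names assumed; the inner rectangle spans the full diameter and the straight section between the two caps):
```rust
// Hedged sketch, not the actual implementation.
fn capsule_inner_rectangle(capsule: &Capsule2d) -> Rectangle {
    Rectangle::new(capsule.radius * 2.0, capsule.half_length * 2.0)
}
```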
## Changelog
### Added
- `Capsule2d::to_inner_rectangle`, `Capsule2d::area` and
`Capsule2d::perimeter`
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
Co-authored-by: James Liu <contact@jamessliu.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
(Note: #15434 implements something very similar to this for functional
curve adaptors, which is why they aren't present in this PR.)
# Objective
Previously, there was basically no chance that the
explicitly-interpolating sample curve structs from the `Curve` API would
actually be `Reflect`. The reason for this is functional programming:
the structs contain an explicit interpolation `I: Fn(&T, &T, f32) -> T`
which, under typical circumstances, will never be `Reflect`, which
prevents the derive from realistically succeeding. In fact, they won't
be a lot of other things either, notably including both `Debug` and
`TypePath`, which are also required for reflection to succeed.
The goal of this PR is to weaken the implementations of reflection
traits for these structs so that they can implement `Reflect` under
reasonable circumstances. (Notably, they will still not be
`FromReflect`, which is unavoidable.)
## Solution
The function fields are marked as `#[reflect(ignore)]`, and the derive
macro for `Reflect` has `FromReflect` disabled. (This is not fully
optimal, but we don't presently have any kind of "read-only" attribute
for these fields.) Additionally, these structs receive custom `Debug`
and `TypePath` implementations that display the function's (unstable!)
type name instead of its value or type path (respectively). In the case
of `TypePath`, this is a bit janky, but the instability of `type_name`
won't generally present an issue for generics, which would have to be
registered manually in the type registry anyway, which is impossible
because the function type parameters cannot be named.
(And in general, the "blessed" route for such cases would generally
involve manually monomorphizing the function parameter away, which also
allows access to `FromReflect` etc. through very ordinary use of the
derive macro.)
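Concretely, the resulting attribute pattern looks roughly like this (field names are illustrative, not the exact `sample_curves` definitions):
```rust
#[derive(Reflect)]
#[reflect(from_reflect = false)]
struct SampleCurveSketch<T, I> {
    times: Vec<f32>,
    samples: Vec<T>,
    // The interpolation function is what blocks the Reflect/Debug/TypePath derives,
    // so it is ignored by reflection and handled by the custom impls instead.
    #[reflect(ignore)]
    interpolation: I,
}
```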
## Testing
Tests in the new `bevy_math::curve::sample_curves` module guarantee that
these are actually `Reflect` under reasonable circumstances.
---
## Future changes
If and when function item types become `Default`, these types will need
to receive custom `FromReflect` implementations that exploit it. Such a
custom implementation would also be desirable if users start doing
things like wrapping function items in `Default`/`FromReflect` wrappers
that still implement a `Fn` trait.
Additionally, if function types become nameable in user-space, the
stance on `Debug`/`TypePath` may bear reexamination, since partial
monomorphization through wrappers would make implementing reflect traits
for function types potentially more viable.
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
We introduced the fancy Curve API earlier in this version. The goal of
this PR is to provide a level of integration between that API and the
existing spline constructions in `bevy_math`.
Note that this PR only covers the integration of position-sampling via
the `Curve` API. Other (substantially more complex) planned work will
introduce general facilities for handling derivatives.
## Solution
`CubicSegment`, `CubicCurve`, `RationalSegment`, and `RationalCurve` all
now implement `Curve`, using their `position` function to sample the
output.
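A hedged usage sketch (assuming `CubicSegment::new_bezier` and the new `Curve` impl):
```rust
// Sample a cubic Bezier easing segment through the Curve API.
let segment = CubicSegment::new_bezier(Vec2::new(0.25, 0.1), Vec2::new(0.25, 1.0));
let point: Vec2 = segment.sample_clamped(0.5);
```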
Additionally, some documentation has been updated/corrected, and
`Serialize`/`Deserialize` derives have been added for all the curve
structs. (Note that there are some barriers to automatic registration of
`ReflectSerialize`/`ReflectDeserialize` involving generics that have not
been resolved in this PR.)
---
## Migration Guide
The `RationalCurve::domain` method has been renamed to
`RationalCurve::length`. Calling `.domain()` on a `RationalCurve` now
returns its entire domain as an `Interval`.
# Objective
This implements another item on the way to complete the `Curves`
implementation initiative
Citing @mweatherley
> Curve adaptors for making a curve repeat or ping-pong would be useful.
This adds three widely applicable adaptors:
- `ReverseCurve` "plays" the curve backwards
- `RepeatCurve` to repeat the curve `n` times, where `n` is in `[0, inf)`
- `ForeverCurve` which extends the curves domain to `EVERYWHERE`
- `PingPongCurve` (name wip (?)) to chain the curve with its reverse.
This would be achievable with `ReverseCurve` and `ChainCurve`, but it
would require the use of `by_ref` which can be restrictive in some
scenarios where you'd rather just consume the curve. Users can still
create the same effect by combination of the former two, but since this
will be most likely a very typical adaptor we should also provide it on
the library level. (Why it's typical: you can create a single period of
common waves with it pretty easily, think square wave (= pingpong +
step), triangle wave ( = pingpong + linear), etc.)
- `ContinuationCurve` which chains two curves but also makes sure that
the samples of the second curve are translated so that
`sample(first.end) == sample(second.start)`
## Solution
Implement the adaptors above. (More suggestions are welcome!)
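A hedged usage sketch, assuming the adaptors are exposed as `Curve` methods named after them and that they return `Result`s (most of them need a bounded domain):
```rust
let ramp = function_curve(Interval::UNIT, |t| t);
let triangle = ramp.ping_pong().unwrap(); // up on [0, 1], back down on [1, 2]
let wave = triangle.repeat(3).unwrap();   // the triangle wave, repeated
let reversed = wave.reverse().unwrap();   // "played" from the end of its domain
```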
## Testing
- [x] add simple tests. One per adaptor
---------
Co-authored-by: eckz <567737+eckz@users.noreply.github.com>
Co-authored-by: Matty <2975848+mweatherley@users.noreply.github.com>
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
Co-authored-by: Matty <weatherleymatthew@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
- Fixes #6370
- Closes #6581
## Solution
- Added the following lints to the workspace:
- `std_instead_of_core`
- `std_instead_of_alloc`
- `alloc_instead_of_core`
- Used `cargo +nightly fmt` with [item level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Item%5C%3A)
to split all `use` statements into single items.
- Used `cargo clippy --workspace --all-targets --all-features --fix
--allow-dirty` to _attempt_ to resolve the new linting issues, and
intervened where the lint was unable to resolve the issue automatically
(usually due to needing an `extern crate alloc;` statement in a crate
root).
- Manually removed certain uses of `std` where negative feature gating
prevented `--all-features` from finding the offending uses.
- Used `cargo +nightly fmt` with [crate level use
formatting](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#Crate%5C%3A)
to re-merge all `use` statements matching Bevy's previous styling.
- Manually fixed cases where the `fmt` tool could not re-merge `use`
statements due to conditional compilation attributes.
## Testing
- Ran CI locally
## Migration Guide
The MSRV is now 1.81. Please update to this version or higher.
## Notes
- This is a _massive_ change to try and push through, which is why I've
outlined the semi-automatic steps I used to create this PR, in case this
fails and someone else tries again in the future.
- Making this change has no impact on user code, but does mean Bevy
contributors will be warned to use `core` and `alloc` instead of `std`
where possible.
- This lint is a critical first step towards investigating `no_std`
options for Bevy.
---------
Co-authored-by: François Mockers <francois.mockers@vleue.com>
# Objective
Updating ``glam`` to 0.29, ``encase`` to 0.10.
## Solution
Update the necessary ``Cargo.toml`` files.
## Testing
Ran ``cargo run -p ci`` on Windows; no issues came up.
---------
Co-authored-by: aecsocket <aecsocket@tutanota.com>
# Objective
- Fixes #15236
## Solution
- Use bevy_math::ops instead of std floating point operations.
## Testing
- Did you test these changes? If so, how?
Unit tests and `cargo run -p ci -- test`
- How can other people (reviewers) test your changes? Is there anything
specific they need to know?
Execute `cargo run -p ci -- test` on Windows.
- If relevant, what platforms did you test these changes on, and are
there any important ones you can't test?
Windows
## Migration Guide
- Not a breaking change
- Projects should use `bevy_math` where applicable
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
# Objective
- Another way of specifying rotations was requested in
https://github.com/bevyengine/bevy/issues/11132#issuecomment-2344603178
## Solution
- Add methods on `Rot2`
- `turn_fraction(fraction: f32) -> Self`
- `as_turn_fraction(self) -> f32`
- Also add some documentation on the range of rotation
## Testing
- extended existing tests
- added new tests
## Showcase
```rust
let rotation1 = Rot2::degrees(90.0);
let rotation2 = Rot2::turn_fraction(0.25);
// rotations should be equal
assert_relative_eq!(rotation1, rotation2);
// The rotation should be 90 degrees
assert_relative_eq!(rotation2.as_radians(), FRAC_PI_2);
assert_relative_eq!(rotation2.as_degrees(), 90.0);
```
---------
Co-authored-by: Joona Aalto <jondolf.dev@gmail.com>
Co-authored-by: Jan Hohenheim <jan@hohenheim.ch>
# Objective
`Capsule2d::sample_interior` uses the radius of the capsule for the
width of its rectangular section. It should be using two times the
radius for the full width!
I noticed this as I was getting incorrect results for angular inertia
approximated from a point cloud of points sampled on the capsule. This
hinted that something was wrong with the sampling.
## Solution
Multiply the radius by two to get the full width of the rectangular
section. With this, the sampling produces the correct result in my
tests.
Hello,
I'd like to contribute to this project by adding some useful constants
and improving the documentation for the AspectRatio struct. Here's a
summary of the changes I've made:
1. Added new constants for common aspect ratios:
- SIXTEEN_NINE (16:9)
- FOUR_THREE (4:3)
- ULTRAWIDE (21:9)
2. Enhanced the overall documentation:
- Improved module-level documentation with an overview and use cases
- Expanded explanation of the AspectRatio struct with examples
- Added detailed descriptions and examples for all methods (both
existing and new)
- Included explanations for the newly introduced constant values
- Added clarifications for From trait implementations
These changes aim to make the AspectRatio API more user-friendly and
easier to understand. The new constants provide convenient access to
commonly used aspect ratios, which I believe will be helpful in many
scenarios.
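A hedged usage sketch of the new constants (assuming `AspectRatio` exposes a `ratio()` accessor):
```rust
let widescreen = AspectRatio::SIXTEEN_NINE;
assert!((widescreen.ratio() - 16.0 / 9.0).abs() < f32::EPSILON);
```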
---------
Co-authored-by: Gonçalo Rica Pais da Silva <bluefinger@gmail.com>
Co-authored-by: Lixou <82600264+DasLixou@users.noreply.github.com>
# Objective
- Crate-level prelude modules, such as `bevy_ecs::prelude`, are plagued
with inconsistency! Let's fix it!
## Solution
Format all preludes based on the following rules:
1. All preludes should have brief documentation in the format of:
> The _name_ prelude.
>
> This includes the most common types in this crate, re-exported for
your convenience.
2. All documentation should be outer, not inner. (`///` instead of
`//!`.)
3. No prelude modules should be annotated with `#[doc(hidden)]`. (Items
within them may be, though I'm not sure why this was done.)
## Testing
- I manually searched for the term `mod prelude` and updated all
occurrences by hand. 🫠
---------
Co-authored-by: Gino Valente <49806985+MrGVSV@users.noreply.github.com>
# Objective
- Add gizmos integration for the new `Curve` things in the math lib
## Solution
- Add the following methods
- `curve_2d(curve, sample_times, color)`
- `curve_3d(curve, sample_times, color)`
- `curve_gradient_2d(curve, sample_times_with_colors)`
- `curve_gradient_3d(curve, sample_times_with_colors)`
## Testing
- I added examples of the 2D and 3D variants of the gradient curve
gizmos to the gizmos examples.
## Showcase
### 2D
![image](https://github.com/user-attachments/assets/01a75706-a7b4-4fc5-98d5-18018185c877)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| Vec2::new(t, (t / 25.0).sin() * 100.0));
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 50.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| (t - 0.5) * 600.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, (t + 300.0) / 600.0)));
gizmos.curve_gradient_2d(curve, times_and_colors);
```
### 3D
![image](https://github.com/user-attachments/assets/3fd23983-1ec9-46cd-baed-5b5e2dc935d0)
```rust
let domain = Interval::EVERYWHERE;
let curve = function_curve(domain, |t| {
    (Vec2::from((t * 10.0).sin_cos())).extend(t - 6.0)
});
let resolution = ((time.elapsed_seconds().sin() + 1.0) * 100.0) as usize;
let times_and_colors = (0..=resolution)
    .map(|n| n as f32 / resolution as f32)
    .map(|t| t * 5.0)
    .map(|t| (t, TEAL.mix(&HOT_PINK, t / 5.0)));
gizmos.curve_gradient_3d(curve, times_and_colors);
```
# Objective
- `Curve<T>` was meant to be object safe, but one of the latest commits
made it not object safe.
- When trying to use `Curve<T>` as `&dyn Curve<T>` this compile error is
raised:
```
error[E0038]: the trait `curve::Curve` cannot be made into an object
--> crates/bevy_math/src/curve/mod.rs:1025:20
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> crates/bevy_math/src/curve/mod.rs:60:8
|
23 | pub trait Curve<T> {
| ----- this trait cannot be made into an object...
...
60 | fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>> {
| ^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...because method `sample_iter` references an `impl Trait` type in its return type
| |
| ...because method `sample_iter` has generic type parameters
...
```
## Solution
- Making `Curve<T>` object safe again by adding `Self: Sized` to newly
added methods.
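Roughly, the fix keeps the generic default methods out of the vtable (a sketch, not the exact trait definition):
```rust
// Inside `trait Curve<T>` (sketch): the default method opts out of dynamic dispatch,
// so `&dyn Curve<T>` stays usable.
fn sample_iter(&self, iter: impl IntoIterator<Item = f32>) -> impl Iterator<Item = Option<T>>
where
    Self: Sized,
{
    iter.into_iter().map(|t| self.sample(t))
}
```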
## Testing
- Added new test that ensures the `Curve<T>` trait can be made into an
object.
# Objective
This is a value that is and will be used as a domain of curves pretty
often. By adding it as a dedicated constant we can get rid of some
`unwraps` and function calls.
## Solution
added `Interval::UNIT`
## Testing
I replaced all occurrences of `interval(0.0, 1.0).unwrap()` with the new
`Interval::UNIT` constant in tests and doc tests.
# Objective
Citing @mweatherley
> As mentioned before, a multi-sampling function in the API which takes
an iterator is probably something we want (e.g. `sample_iter(iter: impl
IntoIterator<Item = f32>) -> impl IntoIterator<Item = T> { //... }`, but
there are some design choices to be made on the details (e.g. does this
filter out points that aren't in the domain? does it do sorting? etc.)
## Solution
I think the most flexible solution for end users is to expose all the
`sample_...` functions with an `iter` equivalent, so we'll have
- `sample_iter`
- `sample_iter_unchecked`
- `sample_iter_clamped`
Answering some questions from the original idea:
> does this filter out points that aren't in the domain?
With these methods the user has the choice to just sample, or, if they want
to filter out invalid samples, to use `sample_iter` and then apply `filter_map`
to the returned iterator themselves.
> does it do sorting?
I think it's the same thing. If the user wants it, they need to do it
themselves by either collecting and sorting a `Vec` or using
`itertools`. I think there is a legit use case for "please sample me
this collection of points that are unordered" and we would destroy it if
we take away too much agency from users by sorting for them.
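A hedged usage sketch of the clamped variant:
```rust
let curve = function_curve(Interval::UNIT, |t| t * t);
// The out-of-domain 2.0 is clamped to the end of the domain.
let values: Vec<f32> = curve.sample_iter_clamped([0.0, 0.5, 1.0, 2.0]).collect();
assert_eq!(values, vec![0.0, 0.25, 1.0, 1.0]);
```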
## Testing
- Added a test which covers all three methods
# Objective
`arc_2d` wasn't actually doing what the docs were saying. The arc wasn't
offset by what was previously `direction_angle` but by `direction_angle
- arc_angle / 2.0`. This meant that the arc's center was lying on the
`Vec2::Y` axis and was then offset. This was probably done to fit the
behavior of the `Arc2d` primitive. I would argue that this isn't
desirable for the plain `arc_2d` gizmo method since
- a) the docs get longer to explain the weird centering
- b) the mental model the user has to keep in mind gets bigger with more
implicit assumptions
given the code
```rust
my_gizmos.arc_2d(Vec2::ZERO, 0.0, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/84894c6d-42e4-451b-b3e2-811266486ede)
where after the fix with
```rust
my_gizmos.arc_2d(Isometry2d::IDENTITY, FRAC_PI_2, 75.0, ORANGE_RED);
```
we get
![image](https://github.com/user-attachments/assets/16b0aba0-f7b5-4600-ac49-a22be0315c40)
To get the same result with the previous implementation you would have
to randomly add `arc_angle / 2.0` to the `direction_angle`.
```rust
my_gizmos.arc_2d(Vec2::ZERO, FRAC_PI_4, FRAC_PI_2, 75.0, ORANGE_RED);
```
This makes constructing helper functions similar to those that already
exist in 3D, like
- `long_arc_2d_between`
- `short_arc_2d_between`
much harder.
## Solution
- Make the arc really start at `Vec2::Y * radius` in counter-clockwise
direction + offset by an angle as the docs state it
- Use `Isometry2d` instead of `position : Vec2` and `direction_angle :
f32` to reduce the chance of messing up rotation/translation
- Adjust the docs for the changes above
- Adjust the gizmo rendering of some primitives
## Testing
- check `2d_gizmos.rs` and `render_primitives.rs` examples
## Migration Guide
- users have to adjust their usages of `arc_2d`:
- before:
```rust
arc_2d(
    pos,
    angle,
    arc_angle,
    radius,
    color
)
```
- after:
```rust
arc_2d(
    // this `+ arc_angle * 0.5` quirk is only needed if you want to preserve the
    // previous behavior with the new API.
    // feel free to try to fix this though, since your current calls to this
    // function most likely involve some computations to counteract that quirk
    // in the first place
    Isometry2d::new(pos, Rot2::radians(angle + arc_angle * 0.5)),
    arc_angle,
    radius,
    color
)
```
# Objective
- Fixes #14796
## Solution
- Copy docs for wrapper methods, make sure they are consistent with the
original docs except for the section on precision.
# Objective
Fixes #14782
## Solution
Enable the lint and fix all upcoming hints (`--fix`). Also tried to
figure out the false-positive (see review comment). Maybe split this PR
up into multiple parts where only the last one enables the lint, so some
can already be merged, resulting in fewer files touched and less
potential for merge conflicts?
Currently, there are some cases where it might be easier to read the
code with the qualifier, so perhaps remove the import of it and adapt
its cases? In the current stage it's just a plain adoption of the
suggestions in order to have a base to discuss.
## Testing
`cargo clippy` and `cargo run -p ci` are happy.
# Objective
Some algorithms don't really work well or are not efficient in 3D space.
When we know we have points on an `InfinitePlane3d` it would be helpful
to have some utility methods to reversibly transform points on the plane
to 2D space to apply some algorithms there.
## Solution
This PR adds a few methods to project 3D points on a plane to 2D
points and inject them back. Additionally there are some other small
common helper methods.
## Testing
- added some tests that cover the new methods
---------
Co-authored-by: Matty <weatherleymatthew@gmail.com>
# Objective
Finish what we started in #14630. The Curve RFC is
[here](https://github.com/bevyengine/rfcs/blob/main/rfcs/80-curve-trait.md).
## Solution
This contains the rest of the library from my branch. The main things
added here are:
- Bulk sampling / resampling methods on `Curve` itself
- Data structures supporting the above
- The `cores` submodule that those data structures use to encapsulate
sample interpolation
The weirdest thing in here is probably `ChunkedUnevenCore` in `cores`,
which is not used by anything in the Curve library itself but which is
required for efficient storage of glTF animation curves. (See #13105.)
We can move it into a different PR if we want to; I don't have strong
feelings either way.
## Testing
New tests related to resampling are included. As I write this, I realize
we could use some tests in `cores` itself, so I will add some on this
branch before too long.
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Co-authored-by: Robert Walter <26892280+RobWalt@users.noreply.github.com>
# Objective
Closes #14474
Previously, the `libm` feature of bevy_math would just pass the same
feature flag down to glam. However, bevy_math itself had many uses of
floating-point arithmetic with unspecified precision. For example,
`f32::sin_cos` and `f32::powi` have unspecified precision, which means
that the exact details of their output are not guaranteed to be stable
across different systems and/or versions of Rust. This means that users
of bevy_math could observe slightly different behavior on different
systems if these methods were used.
The goal of this PR is to make it so that the `libm` feature flag
actually guarantees some degree of determinacy within bevy_math itself
by switching to the libm versions of these functions when the `libm`
feature is enabled.
## Solution
bevy_math now has an internal module `bevy_math::ops`, which re-exports
either the standard versions of the operations or the libm versions
depending on whether the `libm` feature is enabled. For example,
`ops::sin` compiles to `f32::sin` without the `libm` feature and to
`libm::sinf` with it.
This approach has a small shortfall, which is that `f32::powi` (integer
powers of floating point numbers) does not have an equivalent in `libm`.
On the other hand, this method is only used for squaring and cubing
numbers in bevy_math. Accordingly, this deficit is covered by the
introduction of a trait `ops::FloatPow`:
```rust
pub(crate) trait FloatPow {
    fn squared(self) -> Self;
    fn cubed(self) -> Self;
}
```
Next, each current usage of the unspecified-precision methods has been
replaced by its equivalent in `ops`, so that when `libm` is enabled, the
libm version is used instead. The exception, of course, is that
`.powi(2)`/`.powi(3)` have been replaced with `.squared()`/`.cubed()`.
Finally, the usage of the plain `f32` methods with unspecified precision
is now linted out of bevy_math (and hence disallowed in CI). For
example, using `f32::sin` within bevy_math produces a warning that tells
the user to use the `ops::sin` version instead.
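For illustration, a call site inside bevy_math changes roughly like this (a sketch; `ops::sin` is the re-export described above):
```rust
// before: unspecified precision, now linted against inside bevy_math
let s = angle.sin();
// after: resolves to `f32::sin` or `libm::sinf` depending on the `libm` feature
let s = ops::sin(angle);
```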
## Testing
Ran existing tests. It would be nice to check some benchmarks on NURBS
things once #14677 merges. I'm happy to wait until then if the rest of
this PR is fine.
---
## Discussion
In the future, it might make sense to actually expose `bevy_math::ops`
as public if any downstream Bevy crates want to provide similar
determinacy guarantees. For now, it's all just `pub(crate)`.
This PR also only covers `f32`. If we find ourselves using `f64`
internally in parts of bevy_math for better robustness, we could extend
the module and lints to cover the `f64` versions easily enough.
I don't know how feasible it is, but it would also be nice if we could
standardize the bevy_math tests with the `libm` feature in CI, since
their success is currently platform-dependent (e.g. 8 of them fail on my
machine when run locally).
---------
Co-authored-by: IQuick 143 <IQuick143cz@gmail.com>
# Objective
This PR implements part of the [Curve
RFC](https://github.com/bevyengine/rfcs/blob/main/rfcs/80-curve-trait.md).
See that document for motivation, objectives, etc.
## Solution
For purposes of reviewability, this PR excludes the entire part of the
RFC related to taking multiple samples, resampling, and interpolation
generally. (This means the entire `cores` submodule is also excluded.)
On the other hand, the entire `Interval` type and all of the functional
`Curve` adaptors are included.
## Testing
Test modules are included and can be run locally (but they are also
included in CI).
---------
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
Basically it's https://github.com/bevyengine/bevy/pull/13792 with the
bumped versions of `encase` and `hexasphere`.
---------
Co-authored-by: Robert Swain <robert.swain@gmail.com>
Co-authored-by: Alice Cecile <alice.i.cecile@gmail.com>
# Objective
Bevy's direction types have `new` and `new_unchecked` constructors, but
no unchecked variant for the `Dir2::from_xy` and `Dir3::from_xyz`
methods.
For me, this has several times lead to constructing directions like
this, in cases where the components of the direction are already known
to be normalized:
```rust
let normal = Dir2::new_unchecked(Vec2::new(-ray.direction.x.signum(), 0.0));
```
```rust
segment.direction =
    Dir2::new_unchecked(Vec2::new(-segment.direction.x, segment.direction.y));
```
For consistency and ergonomics, it would be nice to have unchecked
variants of `Dir2::from_xy` and `Dir3::from_xyz`:
```rust
let normal = Dir2::from_xy_unchecked(-ray.direction.x.signum(), 0.0);
```
```rust
segment.direction = Dir2::from_xy_unchecked(-segment.direction.x, segment.direction.y);
```
## Solution
Add `Dir2::from_xy_unchecked` and `Dir3::from_xyz_unchecked`.
# Objective
Previously, this area of bevy_math used raw translation and rotations to
encode isometries, which did not exist earlier. The goal of this PR is
to make the codebase of bevy_math more harmonious by using actual
isometries (`Isometry2d`/`Isometry3d`) in these places instead — this
will hopefully make the interfaces more digestible for end-users, in
addition to facilitating conversions.
For instance, together with the addition of #14478, this means that a
bounding box for a collider with an isometric `Transform` can be
computed as
```rust
collider.aabb_3d(collider_transform.to_isometry())
```
instead of using manual destructuring.
## Solution
- The traits `Bounded2d` and `Bounded3d` now use `Isometry2d` and
`Isometry3d` (respectively) instead of `translation` and `rotation`
parameters; e.g.:
```rust
/// A trait with methods that return 3D bounding volumes for a shape.
pub trait Bounded3d {
    /// Get an axis-aligned bounding box for the shape translated and rotated by the given isometry.
    fn aabb_3d(&self, isometry: Isometry3d) -> Aabb3d;

    /// Get a bounding sphere for the shape translated and rotated by the given isometry.
    fn bounding_sphere(&self, isometry: Isometry3d) -> BoundingSphere;
}
```
- Similarly, the `from_point_cloud` constructors for axis-aligned
bounding boxes and bounding circles/spheres now take isometries instead
of separate `translation` and `rotation`; e.g.:
```rust
/// Computes the smallest [`Aabb3d`] containing the given set of points,
/// transformed by the rotation and translation of the given isometry.
///
/// # Panics
///
/// Panics if the given set of points is empty.
#[inline(always)]
pub fn from_point_cloud(
    isometry: Isometry3d,
    points: impl Iterator<Item = impl Into<Vec3A>>,
) -> Aabb3d { //... }
```
This has a couple additional results:
1. The end-user no longer interacts directly with `Into<Vec3A>` or
`Into<Rot2>` parameters; these conversions all happen earlier now,
inside the isometry types.
2. Similarly, almost all intermediate `Vec3 -> Vec3A` conversions have
been eliminated from the `Bounded3d` implementations for primitives.
This probably has some performance benefit, but I have not measured it
as of now.
## Testing
Existing unit tests help ensure that nothing has been broken in the
refactor.
---
## Migration Guide
The `Bounded2d` and `Bounded3d` traits now take `Isometry2d` and
`Isometry3d` parameters (respectively) instead of separate translation
and rotation arguments. Existing calls to `aabb_2d`, `bounding_circle`,
`aabb_3d`, and `bounding_sphere` will have to be changed to use
isometries instead. A straightforward conversion is to refactor just by
calling `Isometry2d/3d::new`, as follows:
```rust
// Old:
let aabb = my_shape.aabb_2d(my_translation, my_rotation);
// New:
let aabb = my_shape.aabb_2d(Isometry2d::new(my_translation, my_rotation));
```
However, if the old translation and rotation are 3d
translation/rotations originating from a `Transform` or
`GlobalTransform`, then `to_isometry` may be used instead. For example:
```rust
// Old:
let bounding_sphere = my_shape.bounding_sphere(shape_transform.translation, shape_transform.rotation);
// New:
let bounding_sphere = my_shape.bounding_sphere(shape_transform.to_isometry());
```
This discussion also applies to the `from_point_cloud` construction
method of `Aabb2d`/`BoundingCircle`/`Aabb3d`/`BoundingSphere`, which has
similarly been altered to use isometries.