Fix `EaseFunction::Exponential*` to exactly hit (0, 0) and (1, 1) #16910
Conversation
```rust
// These are copied from a high precision calculator; I'd rather show them
// with blatantly more digits than needed (since rust will round them to the
// nearest representable value anyway) rather than make it seem like the
// truncated value is somehow carefully chosen.
#[allow(clippy::excessive_precision)]
const LOG2_1023: f32 = 9.998590429745328646459226;
#[allow(clippy::excessive_precision)]
const FRAC_1_1023: f32 = 0.00097751710654936461388074291;
```
Clippy *sigh*

Filed rust-lang/rust-clippy#13855 to not complain when defining constants like this (as things like `FRAC_PI_2` and `FRAC_PI_3` are).
I think having the more precise number in a comment is the best approach, because an overly precise number literal suggests that it can indeed be represented by `f32`, which is not the case. But with the comment pointing out that there is a cutoff, this is fine too.
Keeping it as I have it would be my preference, but I'll do whatever the maintainers here think is best. If nothing else, the values are reproducible from the names of the constants.
I'd recommend putting the representable value as the constant, and the high-precision value as a comment. Ideally, include a procedure for generating these values too, for transparency.
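For what it's worth, such a procedure can be as simple as recomputing the values from their definitions at `f64` precision (a sketch, not the PR's code; `f64` carries about 17 significant digits, well beyond the ~9 that `f32` keeps):

```rust
// Recompute the two constants from their mathematical definitions:
// LOG2_1023 = log2(2^10 - 1) and FRAC_1_1023 = 1 / (2^10 - 1).
fn main() {
    let log2_1023 = (1023.0_f64).log2();
    let frac_1_1023 = 1.0_f64 / 1023.0;
    println!("LOG2_1023   ≈ {log2_1023:.17}");
    println!("FRAC_1_1023 ≈ {frac_1_1023:.17}");
}
```

A high-precision calculator (as the PR used) is still needed for the extra digits shown in the source, but `f64` already pins down every bit of the `f32` values.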
```rust
};

#[test]
fn ease_functions_zero_to_one() {
```
Annotation: among other things, this test ensures that, for example, `exponential_in(0)` doesn't go below zero due to float rounding. (After all, the constants are calculated mathematically, so it was possible that we'd have gotten unlucky with the rounding, but we didn't.)
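As a rough sketch of what that check amounts to (my own illustration with the constants and function inlined, not the PR's test code):

```rust
#[allow(clippy::excessive_precision)]
const LOG2_1023: f32 = 9.998590429745328646459226;
#[allow(clippy::excessive_precision)]
const FRAC_1_1023: f32 = 0.00097751710654936461388074291;

// Same shape as the new definition: f(t) = 2^(10t - A) - B.
fn exponential_in(t: f32) -> f32 {
    f32::exp2(10.0 * t - LOG2_1023) - FRAC_1_1023
}

fn main() {
    // The PR reports the endpoints land exactly on 0 and 1; this sketch
    // asserts a tiny tolerance rather than exactness, since the last bit
    // depends on the platform's `exp2` rounding.
    assert!(exponential_in(0.0).abs() < 1e-6);
    assert!((exponential_in(1.0) - 1.0).abs() < 1e-6);
}
```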
```rust
/// `f(t) ≈ 2.0^(10.0 * (t - 1.0))`
///
/// The precise definition adjusts it slightly so it hits both `(0, 0)` and `(1, 1)`:
/// `f(t) = 2.0^(10.0 * t - A) - B`, where A = log₂(2¹⁰-1) and B = 1/(2¹⁰-1).
```
Co-authored-by: IQuick 143 <[email protected]>
```diff
 }
 #[inline]
 pub(crate) fn exponential_out(t: f32) -> f32 {
-    1.0 - ops::powf(2.0, -10.0 * t)
+    (FRAC_1_1023 + 1.0) - ops::exp2(-10.0 * t - (LOG2_1023 - 10.0))
```
I was wondering if the subtraction between the log and 10 introduces some error (they're quite close values), but I think it's fine.
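One way to see why it's fine (my own numerical check, not from the PR): since `LOG2_1023` and `10.0` are within a factor of two of each other, the Sterbenz lemma says their `f32` subtraction is exact, so the only error in `LOG2_1023 - 10.0` is the rounding of the constant itself:

```rust
fn main() {
    // f32 subtraction of the rounded constant from 10.0 (exact by Sterbenz).
    let diff32 = 9.998590429745328646459226_f32 - 10.0;
    // The same quantity at f64 precision, from the definition log2(1023) - 10.
    let diff64 = (1023.0_f64).log2() - 10.0;
    // The gap is bounded by the f32 rounding of LOG2_1023 (about 5e-7).
    let error = (diff32 as f64 - diff64).abs();
    println!("f32: {diff32}, f64: {diff64}, abs error: {error:e}");
}
```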
# Objective

Almost all of the `*InOut` easing functions are not actually smooth (`SineInOut` is the one exception). Because they're defined piecewise, they jump from accelerating upwards to accelerating downwards, causing infinite jerk at t=½.

## Solution

This PR adds the well-known [smoothstep](https://registry.khronos.org/OpenGL-Refpages/gl4/html/smoothstep.xhtml), as well as its higher-degree version [smootherstep](https://en.wikipedia.org/wiki/Smoothstep#Variations), as easing functions.

Mathematically, these are the classic [Hermite interpolation](https://en.wikipedia.org/wiki/Hermite_interpolation) results:

- for smoothstep, the cubic with velocity zero at both ends
- for smootherstep, the quintic with velocity zero *and acceleration zero* at both ends

And because they're simple polynomials, there's no branching and thus they don't have the acceleration jump in the middle.

I also added some more information and cross-linking to the documentation for these and some of the other easing functions, to help clarify why one might want to use these over other existing ones. In particular, I suspect that if people are willing to pay for a quintic they might prefer `SmootherStep` to `QuinticInOut`.

For consistency with how everything else has triples, I added `Smooth(er)Step{In,Out}` as well, in case people want to run the `In` and `Out` versions separately for some reason. Qualitatively they're not hugely different from `Quadratic{In,Out}` or `Cubic{In,Out}`, though, so they could be removed if you'd rather. They're low cost to keep, though, and convenient for testing.

## Testing

These are simple polynomials, so their coefficients can be read directly from the Horner's method implementation and compared to the reference materials.

The tests from #16910 were updated to also test these 6 new easing functions, ensuring basic behaviour, plus one was updated to better check that the InOut versions of things match their rescaled In and Out versions.

Even small changes like

```diff
- (((2.5 + (-1.875 + 0.375*t) * t) * t) * t) * t
+ (((2.5 + (-1.85 + 0.375*t) * t) * t) * t) * t
```

are caught by multiple tests this way.

If you want to confirm them visually, here are the 6 new ones graphed: <https://www.desmos.com/calculator/2d3ofujhry>

![smooth-and-smoother-step](https://github.com/user-attachments/assets/a114530e-e55f-4b6a-85e7-86e7abf51482)

---

## Migration Guide

This version of bevy marks `EaseFunction` as `#[non_exhaustive]` so that future changes to add more easing functions will be non-breaking. If you were exhaustively matching that enum -- which you probably weren't -- you'll need to add a catch-all (`_ =>`) arm to cover unknown easing functions.
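For reference, the two polynomials being added are the classic ones; a minimal sketch (not the PR's exact code, which lives in `bevy_math`) in the Horner form the description mentions:

```rust
// smoothstep: 3t² - 2t³, the cubic Hermite with zero velocity at both ends.
fn smoothstep(t: f32) -> f32 {
    (3.0 - 2.0 * t) * t * t
}

// smootherstep: 6t⁵ - 15t⁴ + 10t³, the quintic with zero velocity and
// zero acceleration at both ends.
fn smootherstep(t: f32) -> f32 {
    ((6.0 * t - 15.0) * t + 10.0) * t * t * t
}

fn main() {
    // Both hit the endpoints and the midpoint exactly.
    assert_eq!(smoothstep(0.0), 0.0);
    assert_eq!(smoothstep(1.0), 1.0);
    assert_eq!(smoothstep(0.5), 0.5);
    assert_eq!(smootherstep(0.5), 0.5);
}
```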
And add a bunch of tests to show that all the monotonic easing functions have roughly the expected shape.
# Objective

The `EaseFunction::Exponential*` variants aren't actually smooth as currently implemented, because they jump by about 1‰ at the start/end/both.

- Fixes: `EaseFunction::ExponentialIn` jumps at the beginning (#16676)

## Solution

This PR slightly tweaks the shifting and scaling of all three variants to ensure they hit (0, 0) and (1, 1) exactly while gradually transitioning between them.

Graph demonstration of the new easing function definitions: https://www.desmos.com/calculator/qoc5raus2z

(Yes, they look completely identical to the previous ones at that scale. Here's a zoomed-in comparison between the old and the new if you prefer.)

The approach taken was to keep the core 2¹⁰ᵗ shape, but to ask WolframAlpha what scaling factor to use such that f(1) - f(0) = 1, then shift the curve down so that it goes from zero to one instead of from ¹/₁₀₂₃ to ¹⁰²⁴/₁₀₂₃.
## Testing

I've included in this PR a bunch of general tests for all monotonic easing functions to ensure that they hit (0, 0) and (1, 1), that the InOut functions hit (½, ½), and that they have the expected convexity.
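The monotonicity part of such a test might look roughly like this (a sketch with `exponential_in` inlined for illustration; the real tests in the PR cover every easing function):

```rust
#[allow(clippy::excessive_precision)]
fn exponential_in(t: f32) -> f32 {
    f32::exp2(10.0 * t - 9.998590429745328646459226) - 0.00097751710654936461388074291
}

fn main() {
    // Sample the unit interval and check the function never decreases.
    let mut prev = exponential_in(0.0);
    for i in 1..=100 {
        let cur = exponential_in(i as f32 / 100.0);
        assert!(cur >= prev, "not monotonic at sample {i}");
        prev = cur;
    }
}
```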
You can also see by inspection that the difference is small. The change for `exponential_in` is from `exp2(10 * t - 10)` to `exp2(10 * t - 9.99859…) - 0.0009775171…`.

The problem for `exponential_in(0)` is also simple to see without a calculator: 2⁻¹⁰ is obviously not zero, but with the new definition `exp2(-LOG2_1023) - FRAC_1_1023` => `1/(exp2(LOG2_1023)) - FRAC_1_1023` => `FRAC_1_1023 - FRAC_1_1023` => `0`.
## Migration Guide

This release of bevy slightly tweaked the definitions of `EaseFunction::ExponentialIn`, `EaseFunction::ExponentialOut`, and `EaseFunction::ExponentialInOut`. The previous definitions had small discontinuities, while the new ones are slightly rescaled to be continuous. For the output values that changed, the change was less than 0.001, so visually you might not even notice the difference.

However, if you depended on them for determinism, you'll need to define your own curves with the previous definitions.
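If you do need the old values, keeping the previous `ExponentialIn` definition as your own function might look like this (a sketch; `old_exponential_in` is a hypothetical name, not a bevy API, and plain `std` is used in place of bevy's `ops`):

```rust
// The pre-#16910 definition: jumps to 2⁻¹⁰ ≈ 0.00097656 at t = 0
// instead of hitting (0, 0) exactly.
fn old_exponential_in(t: f32) -> f32 {
    f32::exp2(10.0 * t - 10.0)
}

fn main() {
    // The old curve's discontinuity at the start: f(0) = 2⁻¹⁰, not 0.
    assert!((old_exponential_in(0.0) - 0.0009765625).abs() < 1e-9);
    // It does still end exactly at (1, 1).
    assert_eq!(old_exponential_in(1.0), 1.0);
}
```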