Creating tensor from float decimal issue #448
Creating a tensor from a float is causing a problem with the decimals. The same goes if the type is set to f16, but for f64 it seems to work.
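A minimal sketch of the reported behavior, assuming the 0.43 example discussed later in the thread (the exact digits depend on the Nx version, since the printed form is what this issue is about):

iex> Nx.tensor(0.43)
#Nx.Tensor<
  f32
  0.4300000071525574
>
iex> Nx.tensor(0.43, type: :f16)
#Nx.Tensor<
  f16
  0.429931640625
>
iex> Nx.tensor(0.43, type: :f64)
#Nx.Tensor<
  f64
  0.43
>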
Oh, it is because they are in f32 and f16 and we are computing the shortest representation for f64. We would need to implement shortest-representation algorithms for both f32 and f16.
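As a sketch of what that means in practice (assuming the 0.43 example): the f32 bits are widened to a regular Elixir float, i.e. f64, and Elixir's shortest-f64 printer then needs all of these digits to round-trip the widened value:

iex> <<widened::32-float>> = <<0.43::32-float>>
<<62, 220, 40, 246>>
iex> Float.to_string(widened)
"0.4300000071525574"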
Thanks for the feedback! This is an issue with the precision of the float itself, not a problem with Nx. At https://www.h-schmidt.net/FloatConverter/IEEE754.html you can check that the same representation for 0.43 is reached for 32-bit floats. I'm not sure if it allows for converting to f64 or f16.

Also, if you're not familiar with floats, the following pure Elixir snippet might help shed some light on float rounding errors:

iex(1)> 0.1 + 0.2
0.30000000000000004
iex(2)> 7 / 100 * 100
7.000000000000001

edit: what @josevalim said is also relevant to the issue 😅
To be clear, the representation is correct, just not the shortest. /cc @DianaOlympos
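For instance (a sketch using the 0.43 example): both decimal strings decode to the exact same f32 bits, so the shorter "0.43" would already round-trip:

iex> <<0.43::32-float>> == <<0.4300000071525574::32-float>>
true

A shortest-f32 printer would emit "0.43" because it only needs enough digits to identify the f32 value, not the widened f64 one.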
Yeah, I will take a look. It will take a bit of time, but we should be able to handle these.
Hello. I also got unexpected behavior:

iex> Nx.tensor(996372700.0)
#Nx.Tensor<
f32
996372672.0
>
@samuelmanzanera This is because Elixir represents floats as f64 by default, but Nx will infer floating-point values as f32 by default:

iex(1)> <<x::32-float-native, _::8>> = <<996372700.0::32-float-native, 0::8>>
<<195, 141, 109, 78, 0>>
iex(2)> x
996372672.0

If you pass the f64 type explicitly, the value is preserved:

iex(3)> Nx.tensor(996372700.0, type: :f64)
#Nx.Tensor<
  f64
  9.963727e8
>

Note that this isn't directly relevant to this issue, which aims to provide a better short-form textual representation for floating-point tensors.
Ok, thank you.
For an update: I had a look. Making this work for f32 and f16 is a lot of work on the test side, and I simply have not had the time to do that. At this point, this is too big for me to do in my limited free time right now. :( Sorry
@DianaOlympos do you have any code to share? :)
I dumped the current state here. The f64 version works; it was done as a stepping stone. The f32 tests do not pass, and that is partially because the tests are wrong. The tests are "stolen" and repurposed from the OTP ones. That may have been a bad idea, but there are so many edge cases that I did not trust myself to write them from scratch.
@DianaOlympos I wonder if we should just brute-force it: for testing, we write bindings to a C library and compare the outputs.
That was on my todo list to explore, but they use different string formats... But maybe if we test before making it a string... And if we just enumerate the 32-bit space, it is not that large.
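A sketch of how that exhaustive check could look, with a hypothetical shortest_f32/1 standing in for the function under test; this version only validates round-tripping rather than comparing strings against a C library:

# Enumerating all 2^32 bit patterns takes a while but is tractable.
Enum.each(0..0xFFFFFFFF, fn bits ->
  case <<bits::32>> do
    # NaN and infinity bit patterns do not match the float type, so they are skipped
    <<value::32-float>> ->
      string = shortest_f32(value)
      <<roundtrip::32>> = <<String.to_float(string)::32-float>>
      ^bits = roundtrip

    _ ->
      :ok
  end
end)

Round-tripping proves correctness but not minimality, so a trusted reference is still needed to confirm each output is actually the shortest.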
That said, I do not think there is an f16 reference implementation I would trust.