
Error deserializing float to string #211

Open

ryanleh opened this issue Apr 17, 2021 · 4 comments · May be fixed by #212

ryanleh (Contributor) commented Apr 17, 2021

To demonstrate the error, I can simply follow the instructions from Using Opaque SQL in the documentation, but substitute floats for the given integers.
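A sketch of the substituted dataframe, assuming the usual spark-shell session (with spark.implicits._ in scope, as spark-shell provides by default):

scala> val df = Seq(("foo", 508.41f), ("bar", 717.13f), ("baz", 82.31f)).toDF("word", "count")
df: org.apache.spark.sql.DataFrame = [word: string, count: float]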

scala> df.show()
+----+------+
|word| count|
+----+------+
| foo|508.41|
| bar|717.13|
| baz| 82.31|
+----+------+


scala> df.printSchema
root
 |-- word: string (nullable = true)
 |-- count: float (nullable = false)

If I encrypt the dataframe and then decrypt it, I get the following:

scala> dfEncrypted.show()
+----+----------+
|word|     count|
+----+----------+
| foo|508.410004|
| bar|717.130005|
| baz| 82.309998|
+----+----------+


scala> dfEncrypted.printSchema
root
 |-- word: string (nullable = true)
 |-- count: float (nullable = false)


scala> dfEncrypted.collect()
res18: Array[org.apache.spark.sql.Row] = Array([foo,508.41], [bar,717.13], [baz,82.31])

So it appears that there is an error in deserializing the floats to a string, since the displayed numbers are incorrect when using show() but not collect().

One thing I noticed when debugging: if I set breakpoints in the various cases here, running collect() shows that both the StringField and FloatField cases are entered (as expected), but running show() shows StringField being visited twice. So it seems the float value is being (incorrectly) converted to a string somewhere in the C++ code, before it reaches Scala, but I am not sure.

ankurdave (Collaborator) commented Apr 17, 2021

These strings are equivalent representations of the same underlying 32-bit floating-point value. For example, 508.41 and 508.410004 are both represented as 0x43fe347b (ref).
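This is easy to verify from the Scala REPL (a quick check; java.lang.Float.floatToIntBits exposes the raw bit pattern):

scala> java.lang.Float.floatToIntBits(508.41f).toHexString
res0: String = 43fe347b

scala> java.lang.Float.floatToIntBits("508.410004".toFloat).toHexString
res1: String = 43fe347b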

Therefore, I think your observation is due to a difference in behavior between the Opaque and Spark float-to-string cast expressions.

It would take significant effort to match Spark's behavior exactly. One way would be to use a library like Ryu and modify it to match Java's behavior; this can be tricky, particularly for values in scientific notation.

ankurdave (Collaborator) commented:

Also, note that Spark's Dataset#show() inserts a projection that casts each output attribute to string before collecting it to the driver. This explains the difference between show() and collect():

  • For show(), the cast to string occurs on the cluster in Opaque, exposing the difference in behavior.
  • For collect(), the cast to string occurs on the driver in Java (see the sketch below).
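Here is a sketch of the driver-side behavior from the Scala REPL: Java's Float.toString produces the shortest decimal string that round-trips to the same float, whereas a C-style fixed-precision format (presumably closer to what happens in Opaque's C++ cast) exposes the extra digits:

scala> 508.41f.toString
res2: String = 508.41

scala> "%f".format(508.41f)
res3: String = 508.410004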

ryanleh (Contributor, Author) commented Apr 17, 2021

Thanks for such a thorough response! This behavior makes sense now, though it is a shame it isn't easier to fix.

wzheng (Collaborator) commented Apr 18, 2021

@ankurdave thanks for the detailed response :) @ryanleh perhaps we can add this behavior to the documentation, then close the issue?

ryanleh linked a pull request (#212) on Apr 19, 2021 that will close this issue