in_tensor_dtype use np or tf types #3
Comments
> Using np or tf types

For instance, what np type would you pass if your model input is TF_DOUBLE? And I don't see any type string in numpy, so here the Python type str would probably be best to use? There are also types like DT_HALF and DT_RESOURCE that are not in numpy, so I am not sure how these should be mapped. Numpy and tf data types are listed here:

I don't want to use tf types (like tf.float32) because I don't want to pull TensorFlow into the client. What I like most about this client is that it can be used by anyone writing Python without having to install TensorFlow.

> Use float as default dtype

Yes, that would be convenient. Perhaps the default type can also be set in the client's constructor? So if the type is not specified in request_data's objects, check the client's default data type, which defaults to 'DT_FLOAT' but can be overridden by the developer with PredictClient(..., default_data_type='DT_FLOAT').

Feel free to make the change yourself and open a PR. I might find time during this or next week.
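The constructor default described above could look roughly like this. This is a minimal sketch: PredictClient, the default_data_type keyword, and the 'DT_FLOAT' fallback come from the thread, while the internals (the host argument and the resolve_dtype helper) are hypothetical.

```python
class PredictClient:
    """Sketch only: a client that stores a default dtype string.

    PredictClient, default_data_type, and 'DT_FLOAT' come from the
    discussion above; everything else here is hypothetical.
    """

    def __init__(self, host=None, default_data_type='DT_FLOAT'):
        self.host = host
        self.default_data_type = default_data_type

    def resolve_dtype(self, request_obj):
        # Use the per-request dtype if given, else fall back to the
        # client-wide default set in the constructor.
        return request_obj.get('in_tensor_dtype', self.default_data_type)


client = PredictClient(default_data_type='DT_DOUBLE')
print(client.resolve_dtype({'data': [1.0, 2.0]}))             # DT_DOUBLE
print(client.resolve_dtype({'in_tensor_dtype': 'DT_INT32'}))  # DT_INT32
```

With this shape, a lazy caller who never sets in_tensor_dtype still gets a sensible dtype, and a caller who needs something else can override it either per client or per request.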
Dear @stiansel, thank you for your very quick response. I agree with your point on types. What I think would be nice is to scan the input, see if there is a numpy array, and map it directly to the correct type, since it may not be obvious to the user where the types come from.
Totally agree.
I am a little busy these days; I will do it over the weekend. Thank you again for the package, you saved my day!

All the best,
Francesco Saverio
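The "scan the input for a numpy array" idea could be sketched as a small lookup from numpy dtypes to TensorFlow dtype strings, with no TensorFlow import. The mapping table and the infer_tensor_dtype helper are hypothetical; as noted earlier in the thread, DT_HALF maps from np.float16, while types with no numpy equivalent (such as DT_RESOURCE) have to be left out.

```python
import numpy as np

# Hypothetical mapping from numpy dtypes to TensorFlow dtype strings.
# Only common types are covered; anything without a numpy equivalent
# (e.g. DT_RESOURCE) is deliberately absent.
NP_TO_TF_DTYPE = {
    np.dtype(np.float16): 'DT_HALF',
    np.dtype(np.float32): 'DT_FLOAT',
    np.dtype(np.float64): 'DT_DOUBLE',
    np.dtype(np.int32): 'DT_INT32',
    np.dtype(np.int64): 'DT_INT64',
    np.dtype(np.uint8): 'DT_UINT8',
    np.dtype(np.bool_): 'DT_BOOL',
}


def infer_tensor_dtype(value, default='DT_FLOAT'):
    """If value is a numpy array, map its dtype; otherwise use the default."""
    if isinstance(value, np.ndarray):
        try:
            return NP_TO_TF_DTYPE[value.dtype]
        except KeyError:
            raise ValueError('no TF dtype mapping for %s' % value.dtype)
    return default


print(infer_tensor_dtype(np.zeros(3, dtype=np.float64)))  # DT_DOUBLE
print(infer_tensor_dtype([1.0, 2.0]))                     # DT_FLOAT
```

This keeps the client free of TensorFlow while letting numpy users skip in_tensor_dtype entirely for array inputs.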
It would be nice to use np or tf types directly to define in_tensor_dtype. Also, what about making float the default type? So if somebody is lazy, they can just skip passing in_tensor_dtype.

Btw, awesome work! You saved my day :)
Cheers,
Francesco Saverio
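The request above (accept np or tf types, default to float) could be met by normalizing whatever the caller passes into a 'DT_*' string. This normalize_dtype helper and its lookup table are hypothetical; only np.float32, tf.float32 (not importable here by design), and the 'DT_FLOAT' default appear in the thread.

```python
import numpy as np

# Hypothetical table: numpy scalar types to TensorFlow dtype strings,
# so callers can write np.float32 without the client importing TensorFlow.
_TYPE_TO_STRING = {
    np.float32: 'DT_FLOAT',
    np.float64: 'DT_DOUBLE',
    np.int32: 'DT_INT32',
    np.int64: 'DT_INT64',
}


def normalize_dtype(in_tensor_dtype='DT_FLOAT'):
    """Accept either a 'DT_*' string or a numpy type; default to float."""
    if isinstance(in_tensor_dtype, str):
        return in_tensor_dtype
    try:
        return _TYPE_TO_STRING[in_tensor_dtype]
    except KeyError:
        raise ValueError('unsupported dtype: %r' % (in_tensor_dtype,))


print(normalize_dtype())            # DT_FLOAT (the lazy default)
print(normalize_dtype(np.float64))  # DT_DOUBLE
```

Strings pass through untouched, so existing callers keep working while numpy users get the more natural spelling.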