I have a performance issue with your model. This repo provides only one way to use it: passing a single text together with the aspects to assess. That works great, but with a huge number of samples the library becomes a bottleneck. I used it to process a large dataset and couldn't get past about 15% GPU utilization; it seems the processing is not parallelized, and there is no way to do batch processing.
Is there any way to make it faster, or to do batch processing instead of feeding samples one by one? I'd really appreciate your suggestions.
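For context, this is the kind of batched inference I was hoping for. A minimal sketch, assuming the underlying model is a Hugging Face transformers sequence classifier running on TensorFlow; the checkpoint and variable names below are illustrative, not this package's actual API:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Placeholder checkpoint; the real package would load its own fine-tuned weights.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Hypothetical inputs: many (text, aspect) pairs instead of one at a time.
texts = ["The battery life is great but the screen is dim."] * 32
aspects = ["battery"] * 32

# Encode all pairs in one call; padding produces a rectangular batch.
inputs = tokenizer(texts, aspects, padding=True, truncation=True, return_tensors="tf")

# One forward pass over the whole batch keeps the GPU busy
# instead of launching a tiny kernel per sample.
outputs = model(**inputs)
probs = tf.nn.softmax(outputs.logits, axis=-1)
```

Something along these lines (or an exposed batch size parameter) is what I'm missing.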
@moroclash unfortunately not an answer, but a question from another user looking for advice on running this model on GPU with a very large amount of data. Even though you didn't see much speedup on GPU, was it relatively easy to run the package on GPU? As far as I understand, it would just involve uninstalling tensorflow and installing tensorflow-gpu instead. Or did you run into any other difficulties setting up GPU computation?
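For what it's worth, a quick sanity check that TensorFlow actually sees the GPU (my understanding is that from TF 2.1 onward the plain `tensorflow` package already includes GPU support, so the separate `tensorflow-gpu` install should only matter on older versions):

```python
import tensorflow as tf

# Should list at least one PhysicalDevice if CUDA is set up correctly.
print(tf.config.list_physical_devices("GPU"))
```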