Hello everyone! Kudos for the work on the lib! I have a beginner question...
I'm running some initial tests and would like to know whether it's possible to run inference with the pre-trained models on a machine with multiple GPUs. In other words, if the code runs on a machine with two or more GPUs, will docTR distribute the task so it can take advantage of both cards?
Thanks for the help!
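To my knowledge, a single docTR predictor call runs on one device and is not automatically sharded across GPUs. A common workaround for batch inference is plain data parallelism: split the input pages into one shard per GPU and run an independent predictor per shard (e.g. one process per card). Below is a minimal sketch of the sharding step; the commented worker is hypothetical and only illustrates where a docTR `ocr_predictor` call would go in the PyTorch backend.

```python
def shard_round_robin(items, n_workers):
    """Split a list of inputs (e.g. page image paths) into n_workers
    roughly equal shards, one shard per GPU."""
    return [items[i::n_workers] for i in range(n_workers)]

# Hypothetical per-GPU worker (one process per card), sketched as comments
# because it needs docTR + CUDA to actually run:
#
#   import torch
#   from doctr.io import DocumentFile
#   from doctr.models import ocr_predictor
#
#   def worker(gpu_id, paths):
#       torch.cuda.set_device(gpu_id)          # pin this process to one GPU
#       model = ocr_predictor(pretrained=True).cuda()
#       return [model(DocumentFile.from_images(p)) for p in paths]

if __name__ == "__main__":
    pages = [f"page_{i}.png" for i in range(5)]
    # Two GPUs -> two shards, processed independently.
    print(shard_round_robin(pages, 2))
```

Each worker builds its own model copy, so the shards are fully independent and the speedup is close to linear in the number of GPUs for large batches.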