Train and run inference at the same time on the same machine #201
```
2021-03-01 13:49:58.768272: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
```

A.) Wrong issue
Do you want to keep your inference going while those long training jobs are running? The multi-stream-multi-model-multi-GPU version of TrainYourOwnYOLO (now available here) lets you do just that. If you only have one GPU, limit the memory used by your inference streams so that Train_YOLO.py has enough GPU RAM to work with (experiment!). Training will then run, at reduced speed; a sketch of such a memory cap follows below.
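If you are unsure how to cap the memory, here is a minimal sketch assuming TensorFlow 2.x; the 2048 MB limit and the choice of GPU #0 are illustrative placeholders, and the call must happen before the process first touches the GPU:

```python
import tensorflow as tf

# Cap this inference process's slice of GPU #0 so Train_YOLO.py can
# claim the rest of the card. Must run before TensorFlow initializes
# the GPU, i.e. before any model is built or tensors are created.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=2048)],  # MB, tune this
    )
```

Start with a small limit and raise it until your inference streams stop running out of memory.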
If you have two GPUs in your machine, move the inference jobs to the 2nd GPU (`run_on_gpu: 1` in MultiDetect.conf). Training will grab all the memory on GPU #0 and run at full speed, while inference runs at full speed on GPU #1. Training doesn't seem to be smart enough to grab GPU #1 when it is available and GPU #0 is busy; a way to pin a job to a specific GPU by hand is sketched below.
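As a general workaround (not something the training script does by itself), you can force a process onto a specific card by masking devices before TensorFlow initializes CUDA. This sketch assumes the standard CUDA_VISIBLE_DEVICES mechanism:

```python
import os

# Hide GPU #0 from this process; physical GPU #1 then appears as the
# process's only visible GPU. Must be set before TensorFlow touches CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # should list exactly one GPU
```

Launched this way, a training job will happily use GPU #1 even though it never picks it on its own.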