Would you provide a guideline on how to test the model with new images from a camera and calibration matrices acquired from OpenCV?
We are using webcam images and have replaced the calibration matrices, but we do not get proper results.
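For reference, this is roughly what we are doing to build the calibration input. A minimal sketch, assuming the detector expects a KITTI-style 3×4 projection matrix (often called `P2`): with a single camera we take the extrinsics as identity (R = I, t = 0), so `P2` is just the OpenCV intrinsic matrix K padded with a zero column. The `fx, fy, cx, cy` values below are hypothetical placeholders for the values returned by `cv2.calibrateCamera`.

```python
def intrinsics_to_p2(fx, fy, cx, cy):
    """Build a KITTI-style 3x4 projection matrix from OpenCV intrinsics,
    assuming identity extrinsics (camera frame == reference frame)."""
    k = [[fx, 0.0, cx],
         [0.0, fy, cy],
         [0.0, 0.0, 1.0]]
    # Pad K with a zero translation column: P2 = [K | 0]
    return [row + [0.0] for row in k]

# Hypothetical KITTI-like intrinsics, for illustration only
p2 = intrinsics_to_p2(721.5, 721.5, 609.6, 172.9)
for row in p2:
    print(row)
```

Is substituting the OpenCV camera matrix like this the right approach, or does the model expect additional rectification terms in `P2`?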
Since we are using a single camera, I believe the model should not require camera extrinsic parameters at test time. (This is confusing; we assumed the extrinsic parameters should not affect performance, though we are not sure if that is the case.)
- A general guideline would be helpful on how to test the pretrained model on a custom RGB camera. Which hyperparameters need to be changed for the system to run properly, and which should be tuned with respect to the camera intrinsic parameters to get better performance?
- Another question: is there any workaround for testing the network on an image without knowing the camera's intrinsic parameters? Say we have a YouTube video, and all we know is the image resolution. Do you think the detector is capable of estimating the depth of objects up to a scale? (Meaning the relative 3D positions of objects are estimated up to a scale, and then the distances can be rectified using a reference such as the size of the vehicles.)
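To make the second question concrete, here is the kind of fallback we had in mind when only the resolution is known. A minimal sketch under stated assumptions: a pinhole model with square pixels, the principal point at the image center, and a guessed horizontal field of view (the 60° default below is our assumption for a typical webcam, not something taken from the repo).

```python
import math

def approx_intrinsics(width, height, hfov_deg=60.0):
    """Rough pinhole intrinsics from image resolution alone.
    hfov_deg is an assumed horizontal field of view; the principal
    point is placed at the image center and pixels are assumed square."""
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = fx                      # square-pixel assumption
    cx, cy = width / 2.0, height / 2.0
    return fx, fy, cx, cy

fx, fy, cx, cy = approx_intrinsics(1280, 720)
```

Would feeding such approximate intrinsics into the model at least preserve the relative 3D layout of the objects, even if the absolute scale is wrong?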
Redacted from OpenPCDet question: open-mmlab/OpenPCDet#844