Forward-motion images in autonomous vehicle datasets give bad results! #804
Comments

> Put them both in the same training run and use all the pictures. At least that greenhouse should come out better.

> @jaco001 Despite the greenhouse, I …

> With the front view you have more transformations, like scaling and more dynamic angles, so there is more potential for degradation.
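A rough sanity check of the scaling claim above (illustrative numbers only, not from the thread): for a pinhole camera the projected size of an object at depth z is proportional to f/z, so forward motion rescales everything in view between frames, while lateral motion of the same magnitude mostly just shifts the image.

```python
# Sketch: apparent-scale change under forward vs. lateral motion (pinhole model).
# All numbers are made up for illustration.
f = 1000.0   # focal length in pixels
z = 20.0     # object depth in metres
step = 1.0   # camera motion between frames in metres

# Forward motion: depth shrinks, so apparent scale grows by z / (z - step).
scale_forward = (f / (z - step)) / (f / z)
print(f"forward motion: scale factor {scale_forward:.3f}")   # ~1.053

# Lateral motion: depth (and hence scale) is unchanged; the object only
# translates in the image by f * step / z pixels.
shift_lateral = f * step / z
print(f"lateral motion: scale factor 1.000, image shift {shift_lateral:.1f} px")
```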
Original post:

When I trained on an autonomous-driving dataset, I discovered a very strange phenomenon. If I use the front-view camera, the training results are very poor, but if I use the right-view image data, the training results are very good. My guess is that forward-motion images are inherently difficult to reconstruct, and that too many point clouds fall within a single FOV. In addition, when I checked the results, I found that points in the sky were learned at very low depths, which produces some ghosting. Below are some results I obtained using images from PandaSet sequence 053. I used only 10 images with ground-truth poses for testing, because when I used the entire sequence (a total of 80 images from one viewpoint) the training results of the front-view camera were very bad; however, the results for the right-view images were very good.
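To make the "hard to reconstruct" intuition concrete (an illustrative sketch, not from the dataset): with a forward baseline, the two rays to a point near the optical axis are nearly parallel, so the triangulation angle is tiny and depth is poorly constrained, which is consistent with sky points collapsing to low depths.

```python
import numpy as np

def triangulation_angle(c0, c1, p):
    """Angle in degrees between the rays from camera centres c0 and c1 to point p."""
    r0, r1 = p - c0, p - c1
    cos = r0 @ r1 / (np.linalg.norm(r0) * np.linalg.norm(r1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

point = np.array([2.0, 0.0, 50.0])   # a point 50 m ahead, slightly off the optical axis
origin = np.zeros(3)

fwd = triangulation_angle(origin, np.array([0.0, 0.0, 10.0]), point)  # 10 m forward
lat = triangulation_angle(origin, np.array([10.0, 0.0, 0.0]), point)  # 10 m sideways

print(f"forward baseline: {fwd:.2f} deg")   # ~0.6 deg: depth barely constrained
print(f"lateral baseline: {lat:.2f} deg")   # ~11.4 deg: much stronger constraint
```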
The datasets I used are here:
053_front_10_images.zip
053_right_10_images.zip
The directory structure is like this:
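(The tree itself did not survive here; this is the usual COLMAP-style layout that the gaussian-splatting loader expects, with example file names:)

```
053_front_10_images/
├── images/
│   ├── 0000.jpg
│   └── ...
└── sparse/
    └── 0/
        ├── cameras.bin
        ├── images.bin
        └── points3D.bin
```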
I used these commands to train:
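(The exact commands were not preserved; roughly the standard `train.py` invocation from this repo, with placeholder paths:)

```shell
python train.py -s data/053_front_10_images -m output/053_front --eval
python train.py -s data/053_right_10_images -m output/053_right --eval
```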
and I got these logs for the front and right datasets:
front-view
right-view
The following videos are visualizations of the training results of the front-view camera and the right-view camera in the same scene.
3dgs_053_front_colmap-2024-05-11_17.06.04.mp4
3dgs_053_right_colmap-2024-05-11_17.06.50.mp4
I also tried training the GS using the LiDAR point clouds as a prior, but I got the same result: the front view still trained badly.
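(Not from the issue, but in case it helps reproduce that experiment: one way to feed a LiDAR prior to this codebase is to write the points into COLMAP's `points3D.txt` format, which the loader reads for initialization. A minimal sketch; the input file name and the assumption that the points are already in the COLMAP world frame are mine:)

```python
import numpy as np

def write_points3D_txt(path, xyz, rgb):
    """Write an Nx3 point cloud into COLMAP's points3D.txt format.

    Each line: POINT3D_ID X Y Z R G B ERROR TRACK[]; the track is left
    empty here, which I believe the loader tolerates for initialization.
    """
    with open(path, "w") as f:
        f.write("# 3D point list with one line of data per point\n")
        for i, (p, c) in enumerate(zip(xyz, rgb)):
            f.write(f"{i + 1} {p[0]} {p[1]} {p[2]} "
                    f"{int(c[0])} {int(c[1])} {int(c[2])} 0.0\n")

# Hypothetical usage: LiDAR points already transformed into the COLMAP
# world frame, coloured grey since LiDAR has no RGB.
points = np.load("lidar_053_world.npy")   # (N, 3), assumed file
colors = np.full_like(points, 128)
write_points3D_txt("sparse/0/points3D.txt", points, colors)
```

If I remember correctly, the loader prefers `points3D.bin` and falls back to `points3D.txt` only when the binary file is absent, so remove or rename the `.bin` when substituting the LiDAR points.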
So what I want to know is: is there any way to make the training results for forward motion better?