What's the actual input to NICER-SLAM? #9
You are correct: despite what the paper implies, you need more data than a simple image sequence. As far as I can tell, these are only used to compute the loss, not as direct inputs.
COLMAP and the camera poses are not a necessity for running NICER-SLAM. If you check the code, you will find that they are only used to obtain the scene bound and to normalize the scene scale to [-1, 1], as required by the VolSDF/MonoSDF code base. The ground-truth poses are useful for debugging (e.g. gt_cam=True means debugging by giving every camera its ground-truth pose), but they are not involved in the tracking/mapping process.
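For context, normalizing a scene into [-1, 1] from camera poses usually amounts to computing a bounding box of the camera centers and applying a similarity transform. The sketch below is an illustration of that idea only; the function name `normalize_scene` and the exact bounding convention are my assumptions, not NICER-SLAM's actual code:

```python
import numpy as np

def normalize_scene(poses: np.ndarray):
    """Map camera positions into the cube [-1, 1]^3.

    poses: (N, 4, 4) camera-to-world matrices.
    Returns the normalized poses plus the center and scale used,
    so the same transform can be applied to the reconstructed geometry.
    """
    centers = poses[:, :3, 3]               # camera centers, shape (N, 3)
    lo, hi = centers.min(axis=0), centers.max(axis=0)
    center = (lo + hi) / 2.0                # midpoint of the bounding box
    scale = 2.0 / (hi - lo).max()           # longest axis maps to length 2
    normalized = poses.copy()
    normalized[:, :3, 3] = (centers - center) * scale
    return normalized, center, scale
```

After this transform every camera center lies inside the unit cube, which is the precondition the VolSDF/MonoSDF-style SDF fields expect.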
I have a monocular video. What are the steps to run NICER-SLAM on it and get the final output shown in the demo video, with the 3D map and camera trajectory?
It looks like NICER-SLAM requires not only a monocular video/frames as input, but also:
Then how does the accuracy of these inputs impact the final output of NICER-SLAM?
Thank you.