
refinement iteration - discrepancy from paper and potential bug #820

Open
sabahsima opened this issue May 18, 2024 · 0 comments
I'm looking into the code, and there are two points in the implementation I'd like to confirm are behaving as intended:

  1. According to the paper, when a gaussian is cloned, the copy is moved in the direction of the positional gradient. From what I can see in the implementation, the copy is created at the same location, and the only difference is that the optimizer's exponential moving averages are reset to zero for the cloned gaussian (which changes the optimization result, and that is the critical part here; see the first sketch after this list). Is this the intended behavior? Why is it implemented differently from what the paper describes?
  2. In the `densify_and_prune` function, `max_radii2D` is reset as part of the densification (cloning phase). This happens before the pruning step checks whether this radius exceeds the threshold, so the line `big_points_vs = self.max_radii2D > max_screen_size` can never select anything, which looks like a bug (see the second sketch after this list). The same check does appear in the prune function called at the end of optimization, so the final set of gaussians will not include oversized ones, but according to the paper this pruning should happen in every refinement iteration.
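To illustrate point 1, here is a minimal sketch of the two behaviors as I understand them. The tensor names, the mask, and `step_scale` are all hypothetical, chosen only to contrast the paper's description with what I see in the code:

```python
import torch

# Toy stand-ins (hypothetical names, not from the repo): a few gaussian
# centers, their accumulated positional gradients, and a clone mask.
xyz = torch.randn(5, 3)
xyz_grad = torch.randn(5, 3)
mask = torch.tensor([True, False, True, False, False])
step_scale = 0.01  # made-up magnitude for the paper-style nudge

# Paper: the clone is moved along the positional gradient.
new_xyz_paper = xyz[mask] + step_scale * xyz_grad[mask]

# Implementation as I read it: the clone starts at the exact same
# location; only the Adam moments (exp_avg / exp_avg_sq) of the new
# rows are zeroed when they are appended to the optimizer state, which
# is what eventually makes the two copies drift apart.
new_xyz_impl = xyz[mask].clone()
```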
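And for point 2, a small self-contained reproduction of why the in-loop check is dead code, assuming (as it appears) that densification re-allocates `max_radii2D` as zeros before the pruning mask is built:

```python
import torch

max_screen_size = 20.0  # example threshold, value chosen arbitrarily

# After densification, max_radii2D is re-allocated as zeros for the
# new point count (this is the reset I'm referring to):
num_points_after_densify = 8
max_radii2D = torch.zeros(num_points_after_densify)

# The pruning check in densify_and_prune then runs on an all-zero
# tensor, so it can never flag a gaussian as too large on screen:
big_points_vs = max_radii2D > max_screen_size
assert not big_points_vs.any()  # always true -> the check never fires
```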

Am I missing something?
