
F-score is lower compared to incremental reconstruction result #65

Open · Yajiang opened this issue Dec 6, 2023 · 11 comments

Yajiang commented Dec 6, 2023

Thanks for your great work. We tested it on our own data, using a lidar point cloud as the ground truth. In this test, the distributed reconstruction had a lower F-score than the incremental reconstruction. Could you please give any advice?
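(For reference, the F-score here is the usual precision/recall measure of nearest-neighbour distances against the lidar cloud at a fixed threshold. A minimal sketch of that computation with Open3D is below; the file names and the 5 cm threshold are placeholders, not values from this test.)

```python
import numpy as np
import open3d as o3d

def f_score(rec_pcd, gt_pcd, threshold=0.05):
    """F-score between two point clouds at a distance threshold (metres)."""
    # Precision: fraction of reconstructed points within `threshold` of the ground truth.
    d_rec_to_gt = np.asarray(rec_pcd.compute_point_cloud_distance(gt_pcd))
    precision = np.mean(d_rec_to_gt < threshold)
    # Recall: fraction of ground-truth points within `threshold` of the reconstruction.
    d_gt_to_rec = np.asarray(gt_pcd.compute_point_cloud_distance(rec_pcd))
    recall = np.mean(d_gt_to_rec < threshold)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Placeholder file names; both clouds must already be in the same (lidar) frame.
rec = o3d.io.read_point_cloud("dense_reconstruction.ply")
gt = o3d.io.read_point_cloud("lidar_groundtruth.ply")
print("F-score @ 5 cm:", f_score(rec, gt))
```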

AIBluefisher (Owner) commented Dec 6, 2023

I think you aligned the sparse point clouds with the lidar scan to compute the F1 score, and I wonder whether the gap relates to the point density of the SfM results. Can you provide more details on the metrics of the incremental reconstruction versus DAGSfM, e.g. the reprojection errors, the number of recovered camera poses, and the number of 3D points? You may also want to show the reconstruction results in the modified GUI (the clustered camera poses and sparse point clouds) to inspect visual artifacts.
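(These statistics can be read out of a COLMAP/DAGSfM sparse model with `colmap model_analyzer --path <model_dir>`, or, as a sketch, with the pycolmap bindings; the model paths below are placeholders.)

```python
import pycolmap  # COLMAP's Python bindings

# Placeholder model paths; point them at the incremental and DAGSfM sparse models.
for name, path in [("incremental", "sparse_incremental/0"),
                   ("DAGSfM", "sparse_dagsfm/0")]:
    rec = pycolmap.Reconstruction(path)
    # summary() reports the registered images, 3D points, observations and the
    # mean reprojection error, i.e. the numbers asked for above.
    print(f"--- {name} ---")
    print(rec.summary())
```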

Yajiang (Author) commented Dec 7, 2023

> I think you aligned the sparse point clouds with the lidar scan to compute the F1 score, and I wonder whether the gap relates to the point density of the SfM results. Can you provide more details on the metrics of the incremental reconstruction versus DAGSfM, e.g. the reprojection errors, the number of recovered camera poses, and the number of 3D points? You may also want to show the reconstruction results in the modified GUI (the clustered camera poses and sparse point clouds) to inspect visual artifacts.

Actually, we aligned the reconstructed dense point clouds with the lidar point cloud to compute the F1 score, so it mixes the SfM results with the dense reconstruction and is a bit harder to compare. Here are our dense point cloud results. As you can see, they share the same world coordinates, but in some places the distributed reconstruction drifts a bit.
[Image: incremental vs. distributed dense point clouds]
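(The alignment itself can be refined with standard point-to-point ICP in Open3D; the sketch below is only generic, with placeholder file names, voxel size and correspondence threshold, not our exact script.)

```python
import numpy as np
import open3d as o3d

# Generic sketch: refine the alignment of a dense MVS cloud to a lidar scan with
# point-to-point ICP. File names, voxel size and threshold are placeholders.
source = o3d.io.read_point_cloud("dense_reconstruction.ply")   # MVS result
target = o3d.io.read_point_cloud("lidar_groundtruth.ply")      # lidar scan

source = source.voxel_down_sample(voxel_size=0.05)
target = target.voxel_down_sample(voxel_size=0.05)

init = np.eye(4)  # assumes a rough initial alignment already exists
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.5,
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("ICP fitness:", result.fitness)
print("Refinement transform:\n", result.transformation)
```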

Yajiang (Author) commented Dec 7, 2023

Here is some additional info:
[Image]
The BA result:
[Image]
The upper one is the global BA of the distributed reconstruction, the other is the incremental reconstruction.
The info of the distributed reconstruction:
[Image]

AIBluefisher (Owner) commented Dec 7, 2023

> Actually, we aligned the reconstructed dense point clouds with the lidar point cloud to compute the F1 score, so it mixes the SfM results with the dense reconstruction and is a bit harder to compare. Here are our dense point cloud results. As you can see, they share the same world coordinates, but in some places the distributed reconstruction drifts a bit. [Image: incremental vs. distributed dense point clouds]

I think the drift may come from the inaccuracy of the final alignment step of DAGSfM. As pointed out in our recent paper, AdaSfM: From Coarse Global to Fine Incremental Adaptive Structure from Motion, DAGSfM can suffer in its final alignment stage, especially when matching outliers exist. AdaSfM solves this by introducing priors from a coarse global SfM. Since you already have lidar scans, you already have global priors, so I would suggest aligning the SfM results to the coordinate frame of the lidar scans.
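(COLMAP also ships a `model_aligner` command for aligning a model to reference coordinates, but its options vary between versions, so here is a version-independent sketch: estimate a similarity transform from a few corresponding 3D points, e.g. camera centres in the SfM frame versus the same positions in the lidar/GNSS frame, using the Umeyama method. The file names are placeholders.)

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Estimate s, R, t such that dst ≈ s * R @ src + t (Umeyama, 1991).

    src, dst: (N, 3) arrays of corresponding points, N >= 3 and not collinear.
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # avoid a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Placeholder files with corresponding (N, 3) points in the SfM and lidar frames.
sfm_pts = np.loadtxt("sfm_points.txt")
lidar_pts = np.loadtxt("lidar_points.txt")
s, R, t = umeyama_similarity(sfm_pts, lidar_pts)
aligned = (s * (R @ sfm_pts.T)).T + t  # SfM points expressed in the lidar frame
```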

AIBluefisher (Owner) commented Dec 7, 2023

> Here is some additional info: [Image] The BA result: [Image] The upper one is the global BA of the distributed reconstruction, the other is the incremental reconstruction. The info of the distributed reconstruction: [Image]

The scene does not look that large. To make the reconstruction better, you could try increasing the cluster upper bound from 500 to 700 images, which decreases the number of blocks from 3 to 2.
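(Roughly speaking, the image clustering splits the N images into about ceil(N / upper_bound) blocks, which is why raising the upper bound from 500 to 700 drops the block count from 3 to 2 here. The image count below is only an example, not the size of this dataset.)

```python
import math

num_images = 1200  # example value only; substitute your own image count
for upper_bound in (500, 700):
    print(f"upper bound {upper_bound}: about {math.ceil(num_images / upper_bound)} blocks")
# upper bound 500: about 3 blocks
# upper bound 700: about 2 blocks
```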

AIBluefisher (Owner) commented Dec 7, 2023

> Here is some additional info: [Image] The BA result: [Image] The upper one is the global BA of the distributed reconstruction, the other is the incremental reconstruction. The info of the distributed reconstruction: [Image]

Could you visualize the camera poses with the modified GUI in DAGSfM? It clearly shows which camera poses belong to the same cluster.

Yajiang closed this as completed Dec 7, 2023
Yajiang (Author) commented Dec 7, 2023

> The scene does not look that large. To make the reconstruction better, you could try increasing the cluster upper bound from 500 to 700 images, which decreases the number of blocks from 3 to 2.

Yes, I'm trying it now. I'll share the result with you later. Thanks!

Yajiang reopened this Dec 7, 2023
Yajiang (Author) commented Dec 7, 2023

> The scene does not look that large. To make the reconstruction better, you could try increasing the cluster upper bound from 500 to 700 images, which decreases the number of blocks from 3 to 2.

The good news is that the F-score is now almost the same as the incremental result after I changed the block number from 3 to 2. However, the bundle adjustment takes more time than in the incremental reconstruction (I only tested once).

Yajiang (Author) commented Dec 7, 2023

> Could you visualize the camera poses with the modified GUI in DAGSfM? It clearly shows which camera poses belong to the same cluster.

Where can I find a guide to the visualization process?

AIBluefisher (Owner) commented:

> The good news is that the F-score is now almost the same as the incremental result after I changed the block number from 3 to 2. However, the bundle adjustment takes more time than in the incremental reconstruction (I only tested once).

The speed of DAGSfM should be comparable to the original COLMAP when the scene is not large (e.g. fewer than 3000 images for aerial scenes or 2000 images for sequential datasets). It can outperform COLMAP when processing 5000 images or more (you can refer to other issues where DAGSfM was used to reconstruct scenes with about 100K images). Moreover, you can run DAGSfM in distributed mode to improve performance.

AIBluefisher (Owner) commented Dec 8, 2023

> Where can I find a guide to the visualization process?

I think you used the original COLMAP GUI for visualization. I made minor modifications to COLMAP's GUI so that each image has an additional cluster_id property; images that belong to the same cluster are rendered in the same color. You should use the GUI built from DAGSfM and import your model the same way you would with COLMAP's GUI.
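(A minimal sketch of the import, assuming the DAGSfM-built binary sits at the usual CMake output path; all paths are placeholders. `--database_path`, `--image_path` and `--import_path` are the standard COLMAP GUI options.)

```python
import subprocess

# Launch the GUI built from the DAGSfM repository (not a stock COLMAP install)
# and import the merged sparse model; every path below is a placeholder.
subprocess.run([
    "./build/src/exe/colmap", "gui",        # assumed location of the DAGSfM build
    "--database_path", "project/database.db",
    "--image_path", "project/images",
    "--import_path", "project/sparse/0",    # the model whose clusters you want to inspect
], check=True)
```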
