Replace the ray-to-dem intersection algorithm with the secant method #5408
base: dev
Conversation
Just to keep you in the loop with tests on this @oleg-alexandrov, here is partial output from one of the camstats tests:
We are exceptionally close, just outside our test tolerance, but I don't know if there is much we can do about it.
One may try to see if decreasing the tolerance in the stopping criterion of the secant method changes anything, or one could just call these the new reference results. Given that one looks all the way from orbit, 1e-8 is small, whether these are pixels or lon-lat.
I realized that jigsaw also uses this function. I am not sure what it is used for, but there's a possibility that the current stopping tolerance, of 1/100th of a pixel, is too high, especially if one has to do numerical differentiation based on the result of this. In CSM ground-to-image, the tolerance used is about 1e-8. Here 1/1000 of a pixel is likely a better tolerance, and will likely cost at most one more iteration, given that the method converges very fast.
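To illustrate the concern about numerical differentiation, here is a toy sketch (not ISIS or CSM code; the function names and error model are made up). If each solve is only accurate to a tolerance `tol`, a central-difference derivative with step `h` inherits a worst-case error of about `tol / h`, so a looser solver tolerance directly degrades any derivative built on top of it:

```python
def central_diff_worst(f, x, h, tol):
    # Worst case: the underlying solver errs by +tol at x+h and -tol at x-h,
    # so the derivative picks up an extra error of 2*tol / (2*h) = tol / h.
    return ((f(x + h) + tol) - (f(x - h) - tol)) / (2.0 * h)

f = lambda x: x * x        # stand-in for an exact solve; true derivative is 2*x
x, h = 2.0, 1e-2
for tol in (1e-2, 1e-3, 1e-8):
    err = abs(central_diff_worst(f, x, h, tol) - 2 * x)
    print(f"tol={tol:g}  worst-case derivative error={err:.2e}")  # grows like tol/h
```

With `h = 1e-2`, a solver tolerance of 1e-2 wrecks the derivative entirely (error 1.0), 1e-3 still leaves an error of 0.1, while 1e-8 leaves only 1e-6, which is why a tighter stopping tolerance matters here.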
@acpaquette What is the status on this? More work needed, or waiting on bandwidth for additional review?
@jlaura @oleg-alexandrov I still need to look into all of the failing tests here and do some manual verification. I should be able to get to this under continuous support, but it will take a bit longer to fully explore the changes.
Just to loop back around here, this seems like a good change for ISIS, as it allows ISIS to use more local DEMs for both landed and orbital data. I still need to test on a network through jigsaw to see how the results differ.
Sounds good. I would suggest making the tolerance for convergence smaller, maybe by a factor of 10, and seeing the effect. For ground-level DEMs, whose grid can grow with the distance from the rover, the current tolerance looked to be on the high side.
@oleg-alexandrov I tested it against orbital data/networks and everything seems nominal. I was also able to replicate the images produced through ASP for the landed data. I think this PR, from a functional standpoint, is good to go. When you have time, Oleg, would you be able to update the tests? No rush on our end, as the grounded support hasn't been widely publicized. Thanks for an amazing addition to the code base!
I will not be coming back to CSM/ISIS work until mid-June. I have never worked with the ISIS test suite and know very little about its structure. Can the ISIS maintainers pick up the slack? :) Likely not the first time the regression suite broke.
@oleg-alexandrov we've always asked PR authors to write and update their own tests. People would be pretty upset if we suddenly made an exception. The process for updating tests is pretty well documented, and new outside contributors seem to be able to pick it up pretty fast: https://astrogeology.usgs.gov/docs/how-to-guides/isis-developer-guides/writing-isis-tests-with-ctest-and-gtest/
Thanks for the guide. I believe I updated all the tests. I also pushed a change making the tolerance for convergence 1/1000 of a pixel rather than 1/100. This had the added bonus that more tests passed even before I started looking at them. I brought up an issue in #5459: currently some unit tests take a very long time while doing little more than computing a number or two. Multiply that by the very large number of tests and potentially several iterations, and it becomes a pain to deal with.
Any thoughts on this?
@oleg-alexandrov Apologies for the little activity on this; Jenkins has been dead for some time now. I will work in the background this week to double-check tests and provide a review here.
Looking into this now that I have a handle on test failures on mac with a generic mac test config from #5517.
This is a reminder to perhaps attempt to pull this in. This pull request has been around for 6 months. A few months ago I fixed all the tests I could see failing. That meant a large amount of editing of files and plugging in new numbers. Not fun. I can do that a second time, if provided a list of failures (after merging in the latest), but then I hope it will be merged promptly afterwards.
@oleg-alexandrov I am unable to run tests until the merge conflicts have been resolved.
Commits:
- merge
- Save with debug info removed
- Add code with no debug info
- Wipe more debug logic
- Tweak some comments
- Punctuation
- Hardening the logic for intersecting ray with DEM
- DemShape: Let tolerance be 1/1000 pixel, as that is more robust
- Fix FunctionalTestsCamdev and FunctionalTestsCamrange
- Fix FunctionalTestsCampt and FunctionalTestsCamstats
I fixed the conflict. Note that the test results changed in different ways in my branch and the main one, so surely some tests will fail now. When you have the test results, I will fix what is failing.
I will run the tests now. It takes about an hour. I will post the results in the PR.
Were these tests passing before? Most of these have no relation to the function I changed, which projects points from image to ground.
@oleg-alexandrov I have been running tests for all the PRs today and have not run across these test failures. I can run the tests again and see if anything changes. Currently, there are 19 test failures in the dev branch that I did not include in this list.
Thanks for confirming. I am running the tests locally with ctest (both before and after my changes), as I need to see the precise failures and the numbers. I will try to fix what I can.
A problem here is that it is hard to ensure I have the same dependencies, machine, and test data as you folks. Would it be possible to paste all the failures in detail? That is, which test had what value before and what value now, which is what ctest prints when run in verbose mode, I believe. That way I can plug the right numbers into the failing tests (if the change is small enough), and then on your side you can rerun them and see if they pass.
Here are the test logs: tests-part1.log. Please ignore the following tests, which are failing on dev:
Thank you. The useful information I was able to extract from those is:

> EVENTS 1721165603679 The difference between hist->Average() and 127.46759871644132 is 3.0619243602814095e-06, which exceeds .000001

These differences are much larger than would be expected; I don't understand what they mean, or even whether they have anything to do with the code I changed. Last time around I spent at least a day on this. I regretfully no longer have time to go through this process again. The tests take 2-3 hours to run on my machine. Then one has to plug the right numbers into the right files. Mistakes are unavoidable, so one has to rerun the tests. One also has to configure rclone and fetch gigabytes of test data, set up the environment, fetch ISIS, and wait a couple of hours for it to build. I hope the effort I spent modifying the image-to-ground code, which required a new algorithm, is appreciated, and that under the umbrella of the relevant project the ISIS developers can take it from there.
This is an important contribution to have merged in support of work that I am doing. Given the challenges getting access to the test data (#5550), the inability of external contributors to merge updates to the test data, and the fact that @oleg-alexandrov had GTests passing on or about April 8, I would propose that the remaining updates be addressed under an upcoming continuous support sprint. Additionally, I would love to see process improvements that (1) make the test data more accessible, e.g., the proposed solution from #5550, and (2) significantly shorten the turnaround between contribution and review.
This is a proposed fix for the issue of intersecting a ray from a rover camera with a DEM. There were several problems:
This fix first finds a representative elevation based on sampling the DEM, then uses the secant method.
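A minimal sketch of that idea, under stated assumptions (hypothetical names and a simplified flat geometry; the actual ISIS `DemShape` code differs): seed the iteration at the ray parameter where the ray reaches the representative elevation, then apply the secant method to the residual f(t) = ray height minus DEM height.

```python
def intersect_ray_dem(origin, direction, dem_height, rep_elev, tol=1e-3, max_iter=50):
    """Toy secant-method ray/DEM intersection (not the ISIS implementation).

    Assumes a flat (x, y, z) frame and a downward-looking ray (direction[2] != 0).
    dem_height(x, y) is a stand-in for the DEM lookup; rep_elev is the
    representative elevation found by sampling the DEM, used to seed the
    iteration instead of assuming the zero-elevation datum.
    """
    def ray(t):
        return tuple(o + t * d for o, d in zip(origin, direction))

    def f(t):
        x, y, z = ray(t)
        return z - dem_height(x, y)  # residual: ray height minus DEM height

    # Seed: t where the ray reaches the representative elevation, plus a nudge.
    t0 = (rep_elev - origin[2]) / direction[2]
    t1 = t0 * 1.001 + 1e-6
    f0, f1 = f(t0), f(t1)
    for _ in range(max_iter):
        if f1 == f0:
            break
        t2 = t1 - f1 * (t1 - t0) / (f1 - f0)  # secant update
        if abs(t2 - t1) < tol:
            return ray(t2)
        t0, f0, t1, f1 = t1, f1, t2, f(t2)
    return ray(t1)

# Toy check: flat DEM at elevation 100, camera at 1000 m looking down and ahead.
pt = intersect_ray_dem((0.0, 0.0, 1000.0), (0.3, 0.0, -1.0),
                       lambda x, y: 100.0, rep_elev=100.0)
print(pt)  # the z component converges to the DEM elevation, 100
```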
One issue that needs looking into is the stopping criterion. I left it the same, at 1/100th of the pixel resolution. For MSL the computed resolution ranged between 2 m and 10 m, which I think is a bit too coarse for MSL. I am not sure the provided resolution() function is accurate enough.
To test this, one should first merge the ALE code, regenerate the ISD, and reattach it to the cub file with the same shapefile as before. One must fetch the latest ASP if doing mapprojection, as there's a bug fix there too.
I tested both the old fixed-point method and the new secant method after initializing with a representative elevation rather than the zero datum. The old approach does well with campt when line and sample are 200 or more, when the ray looks more vertical, but it fails for line and sample between 120 and 200 (approx.), when rays are more oblique. The new secant method still does well there, but it breaks down when line and sample are 120 and below (approx.), when one looks closer to the horizon. This was done with dataset NLB_450492465EDR_F0310634NCAM00357M1.cub with the fixed ISD and the same shape file as what @acpaquette prepared.
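A toy 1-D analogue of why fixed-point iteration degrades for oblique rays while the secant method keeps working (this is an illustration, not the ISIS code): with ray height z0 - t, horizontal position m*t (larger m means a more oblique ray), and DEM slope s, the fixed-point update t ← z0 - s*m*t has contraction factor |s*m|, so it diverges once |s*m| > 1, whereas the secant method on the same residual still converges (here exactly, since the residual is linear).

```python
# Toy setup (assumed, not from ISIS): ray height z(t) = z0 - t,
# horizontal position x(t) = m * t, DEM elevation dem(x) = s * x.
# Exact intersection: z0 - t = s * m * t  =>  t = z0 / (1 + s * m).

def fixed_point(z0, m, s, t=0.0, iters=30):
    # Old-style update: jump to where the ray height matches the DEM height
    # at the current horizontal position. Contraction factor is |s * m|.
    for _ in range(iters):
        t = z0 - s * (m * t)
    return t

def secant(z0, m, s, t0=0.0, t1=1.0, tol=1e-10, iters=50):
    # Secant method on the residual f(t) = (z0 - t) - s * m * t.
    f = lambda t: (z0 - t) - s * (m * t)
    for _ in range(iters):
        f0, f1 = f(t0), f(t1)
        if f1 == f0 or abs(t1 - t0) < tol:
            break
        t0, t1 = t1, t1 - f1 * (t1 - t0) / (f1 - f0)
    return t1

z0, s = 100.0, 0.5
exact = lambda m: z0 / (1.0 + s * m)
print(abs(fixed_point(z0, 1.0, s) - exact(1.0)))  # |s*m| = 0.5: converges
print(abs(secant(z0, 3.0, s) - exact(3.0)))       # |s*m| = 1.5: secant still exact
print(abs(fixed_point(z0, 3.0, s)))               # |s*m| = 1.5: fixed point blows up
```

The crossover at |s*m| = 1 is the toy version of the oblique-ray regime described above; near the horizon even the secant method eventually struggles because the residual flattens out.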
This is a large change. It is suggested to make one build with and another without this change and run some sanity checks for orbital sensors. This is my last week on CSM work, and I will not have much time for a large amount of follow-up work if this change has a big impact.
Licensing
This project is mostly composed of free and unencumbered software released into the public domain, and we are unlikely to accept contributions that are not also released into the public domain. These words should appear somewhere near the top of each file: