Running Qoala-T on multi-site datasets #37
Hi Leonardo,
Thank you for using Qoala-T. You are correct that the BrainTime-based model predicts a Qoala-T score for each individual scan, so it does not use information from the rest of the input dataset to predict the output. A good starting point for finding out what is going on would be to check the surface-hole measures, as these have the highest importance values in the model. I hope this is helpful.
Best regards,
Lara
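Following the suggestion above, a quick per-site summary of the Qoala-T output can show whether one site is an outlier on both the ratings and the high-importance surface-hole measures. This is only a sketch: the column names `site`, `Recommendation`, `lhSurfaceHoles`, and `rhSurfaceHoles` are hypothetical placeholders and would need to be adjusted to the actual collated table.

```python
# Sketch: summarize Qoala-T ratings and surface-hole counts per site.
# Column names are assumptions; adapt them to your own collated table.
from collections import defaultdict

def summarize_by_site(rows):
    """Return per-site exclude rate and mean total surface-hole count."""
    stats = defaultdict(lambda: {"n": 0, "exclude": 0, "holes": 0.0})
    for row in rows:
        s = stats[row["site"]]
        s["n"] += 1
        if row["Recommendation"].lower() == "exclude":
            s["exclude"] += 1
        s["holes"] += int(row["lhSurfaceHoles"]) + int(row["rhSurfaceHoles"])
    return {site: {"exclude_rate": s["exclude"] / s["n"],
                   "mean_holes": s["holes"] / s["n"]}
            for site, s in stats.items()}

if __name__ == "__main__":
    # Inline demo rows; in practice, read the collated table with csv.DictReader.
    demo = [
        {"site": "A", "Recommendation": "exclude",
         "lhSurfaceHoles": "81", "rhSurfaceHoles": "90"},
        {"site": "B", "Recommendation": "include",
         "lhSurfaceHoles": "5", "rhSurfaceHoles": "7"},
    ]
    for site, s in summarize_by_site(demo).items():
        print(f"{site}: exclude rate {s['exclude_rate']:.2f}, "
              f"mean surface holes {s['mean_holes']:.1f}")
```

If one site shows both a much higher exclude rate and a much higher mean surface-hole count, that points to the surface-hole features (rather than, say, volumetric measures) driving the exclusions.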
On 27 Apr 2021, at 01:50, Leonardo Tozzi wrote:
Dear all,
Thank you for releasing this very useful software. I have a question about using Qoala-T on multi-site datasets. Is the Qoala-T score assigned per individual scan, i.e. independently of all other scans in the dataset? Or is some sort of normalization across all scans done on the input table before running the classifier?
The reason I ask is that I processed a large number of T1s from different scanners/sites with FreeSurfer. I then used your scripts to collate all results into one table and ran Qoala-T (with the BrainTime model). What I am seeing is that essentially an entire site gets marked as "exclude". This site's sequence is probably quite different from the others, but it seems unlikely that all of its scans (there are hundreds) are of poor quality. Should I consider rerunning Qoala-T on each site separately, or would this not change anything?
Thank you.
Dear Lara,
Sorry for the late reply. I have attempted a bias field correction on the T1s and rerun FreeSurfer, but the problem persists. This took a while.
Sorry for double posting, but I have been looking more closely at the log files and found something that maybe you could help me confirm. FreeSurfer automatically corrects topology defects as part of recon-all using mris_fix_topology, so the defect counts in aseg.stats are holes BEFORE fixing. Here is an example for one of my subjects that was assigned a very low Qoala-T score:

    # Measure lhSurfaceHoles, lhSurfaceHoles, Number of defect holes in lh surfaces prior to fixing, 81, unitless
    # Measure rhSurfaceHoles, rhSurfaceHoles, Number of defect holes in rh surfaces prior to fixing, 90, unitless

And in the file that your R script extracts, which goes into the classifier, I see for this subject:

    lhSurfaceHoles=81
    rhSurfaceHoles=90

But these are holes BEFORE fixing. In your original paper, did you use the hole counts before fixing or after fixing (after fixing they would probably be 0)? This might be what is interfering with the classification.
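As a cross-check of the point above, the lh/rh SurfaceHoles values can be pulled straight from an aseg.stats file and compared against the collated table. A minimal sketch, assuming the "# Measure" line format described in this thread; the regular expression is an assumption and may need loosening for other FreeSurfer versions:

```python
# Sketch: extract the SurfaceHoles "# Measure" lines from an aseg.stats file.
# The line format is assumed from the example quoted in this thread.
import re

MEASURE_RE = re.compile(r"^# Measure (\w+), \w+, .*?, ([\d.]+), \w+")

def read_surface_holes(lines):
    """Return {measure_name: value} for the lh/rh SurfaceHoles measures."""
    holes = {}
    for line in lines:
        m = MEASURE_RE.match(line)
        if m and "SurfaceHoles" in m.group(1):
            holes[m.group(1)] = float(m.group(2))
    return holes

if __name__ == "__main__":
    # Inline demo lines; in practice, pass an open aseg.stats file object.
    demo = [
        "# Measure lhSurfaceHoles, lhSurfaceHoles, Number of defect holes in lh surfaces prior to fixing, 81, unitless",
        "# Measure rhSurfaceHoles, rhSurfaceHoles, Number of defect holes in rh surfaces prior to fixing, 90, unitless",
    ]
    print(read_surface_holes(demo))
```

Running this over each subject's stats file and diffing against the classifier's input table would confirm that the before-fixing counts are indeed what the model sees.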
Dear Leonardo,
Yes, indeed, this was the same in FreeSurfer 6.0. There may have been significant changes in the surface-hole correction pipeline since then, but it would be best to ask the FreeSurfer team about this.
At this point you may also want to check the scans manually, at least for a subset, to see what is going on. If the scans and segmentations of your outlier dataset look normal, that would be a good starting point for the FreeSurfer help desk; perhaps some scanner parameters play a role here. I hope you will solve this soon, and if we can be of further help, please let us know.
Lara