After successfully implementing steps 1 and 2 of Qoala-T for our FreeSurfer v7.4.0-processed T1 scans, we noticed that the predicted Qoala-T scores and the resulting recommendations vary considerably when we run the Qoala_T_B_subset_based_github R script multiple times with the same manual quality-control ratings from step 1.
Our full dataset comprises 158 scans, for which we extracted the required measures with the Stats2Table_fs7 R script.
We then performed manual quality control on 45 scans and decided to include 35 scans and exclude 11 scans.
When we run Qoala_T_B_subset_based_github on the dataset containing these 45 manual quality-control ratings, the recommendations for the remaining 113 scans differ substantially on every run, e.g.:
8th run: Qoala-T recommendations: Include: 108 scans, Exclude: 5 scans, mean Qoala-T score: 71.8
etc.
(Note: we made sure to clear the R environment between each of these runs.)
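In case it is useful context for our question: we suspect the variation comes from unseeded random number generation, since (as far as we understand) the subset-based script fits a random-forest model, which draws random samples during training. Below is a minimal base-R sketch of what we mean by fixing the RNG state; the seed value 1234 and the draw of 45 out of 158 are arbitrary illustrative choices, not part of the Qoala-T code:

```r
# Sketch: without a fixed seed, repeated random draws differ between runs.
# Fixing the RNG state with set.seed() before each run makes the draws,
# and hence anything built on them, reproducible.

set.seed(1234)                # arbitrary fixed seed (our choice)
draw1 <- sample(1:158, 45)    # e.g. a random subset of 45 of our 158 scans

set.seed(1234)                # same seed again
draw2 <- sample(1:158, 45)

identical(draw1, draw2)       # TRUE: same seed, identical draw
```

Would placing a set.seed() call at the top of the script be the recommended way to make the Qoala-T predictions reproducible, or would that just pin us to one of many equally arbitrary runs?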
Do you have any recommendations on how to proceed? Is this to be expected? Which run should we choose?
Thank you for your help!