Feature request summary lbf #216

Open
jerome-f opened this issue Jan 18, 2024 · 4 comments

@jerome-f

@pcarbo I wanted to check if adding a summary lbf across single effects is feasible. Right now I take the column-wise maximum of the lbf_matrix variable.

pcarbo (Member) commented Jan 19, 2024

@jerome-f Sorry, I'm not clear on what you are asking for, and I'm not sure what lbf_matrix is. Do you mean lbf_variable? Could you provide a bit more detail? An example might help.

@jerome-f (Author)

Hey Peter, sorry, that was a typo: I meant the lbf_variable matrix. What I am looking for is one lbf for each SNP; right now we get an lbf_variable value for each SNP across each of the L effects. I am trying to meta-analyze the credible sets reported across models (FINEMAP and SuSiE-RSS) using BMA. As you'd be aware, SuSiE and FINEMAP don't always agree 1:1 on credible set configurations or PIPs, but by averaging across models you can quantify the uncertainty around a specific SNP.
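
For concreteness, a rough sketch of what I mean, assuming fit is a susie() or susie_rss() fit (the log-sum-exp variant is just one possible way to collapse across effects):

```r
# fit$lbf_variable is an L x p matrix of single-effect log Bayes factors
# (one row per effect l = 1..L, one column per SNP).
lbf <- fit$lbf_variable

# What I do now: one summary lbf per SNP via the column-wise maximum.
lbf_max <- apply(lbf, 2, max)

# One alternative: combine evidence across the L effects with a
# numerically stable log-sum-exp instead of a max.
logsumexp <- function(x) max(x) + log(sum(exp(x - max(x))))
lbf_sum <- apply(lbf, 2, logsumexp)
```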

Best
Jerome

pcarbo (Member) commented Jan 19, 2024

@jerome-f The logBFs (res$lbf_variable) are based on a simple association test, so I'm not sure that's what you want if your aim is to compare fine-mapping results across different analyses. I'm not sure what the right thing to do is, but if you want to compare CSs across analyses, the PIPs (res$alpha) are probably closer to what you want, because they compare the evidence for an effect against the other candidate SNPs. So, for example, taking apply(res$alpha,2,max) might be better.
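
In code, roughly, assuming res is a susie fit (susie_get_pip is the susieR accessor for the overall PIPs):

```r
# res$alpha is an L x p matrix: row l gives the posterior probability
# that each SNP is the variable selected by single effect l.
# A per-SNP summary that compares evidence across candidate SNPs:
max_alpha <- apply(res$alpha, 2, max)

# The overall per-SNP PIPs combine evidence across all L effects:
pip <- susieR::susie_get_pip(res)  # also available as res$pip
```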

You could also take a look at what Chris Wallace does in coloc, which uses the results of susie for colocalization.

@jerome-f (Author)

@pcarbo Thanks, that makes sense. I will check out the coloc code base once again (that's where I looked first). But broadly speaking, given the same data, fine-mapping with different Bayesian methods will give you somewhat different credible set configurations and PIPs. When two methods agree you can be more certain about the inference, but when there is disagreement it would be prudent to reconcile them so that you can attach a confidence interval to the PIPs/credible sets, etc. I haven't seen anyone really do this in the fine-mapping context.
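
Something like this hypothetical sketch is what I have in mind; pip_susie and pip_finemap stand in for per-SNP PIP vectors from the two methods, aligned to the same SNPs, with equal model weights:

```r
# Hypothetical inputs: per-SNP PIPs from SuSiE-RSS and FINEMAP for the
# same set of SNPs, in the same order.
pips <- rbind(susie = pip_susie, finemap = pip_finemap)

# BMA-style average with equal weights, plus a crude per-SNP measure
# of disagreement between the methods.
pip_avg <- colMeans(pips)
pip_spread <- apply(pips, 2, function(p) diff(range(p)))
```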
