How would you optimize discrete random variable parameters using an external error function in pgmpy? #1750
Unanswered · ferencbartok asked this question in Q&A
I have a Bayesian network with discrete random variables. The initial probabilities and conditional probabilities of the variables are set by experts. There are external requirements that tell us, for specific sets of evidence (not for all variables), what the inferred probabilities for a given set of nodes should be. I'd like to optimize the variable parameters so that as many of these requirements as possible are satisfied.

Example: take a network A, B, C, D, where A is the parent of B and C, and B and C are the parents of D. Say there is a requirement that if A=1, then D should be 1 with a probability of about 65%. I'd like to calculate the parameters of B and C (and sometimes D) so that P(D=1 | A=1) is around 65%.

In the pgmpy examples, parameter learning is done by generating data. How could I incorporate an error function and minimize it during parameter learning? Would I have to generate the data with that in mind, or is there another way?

The error function would look something like this:
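A minimal sketch, assuming pgmpy's `VariableElimination` for inference and a hypothetical requirements list of (evidence, node, state, target) tuples; the network is the A, B, C, D example above and the CPD values are placeholders standing in for the expert-set ones:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# The A -> B, A -> C, B -> D, C -> D network from the example above;
# the CPD values below are made-up placeholders.
model = BayesianNetwork([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
model.add_cpds(
    TabularCPD("A", 2, [[0.5], [0.5]]),
    TabularCPD("B", 2, [[0.7, 0.4], [0.3, 0.6]], evidence=["A"], evidence_card=[2]),
    TabularCPD("C", 2, [[0.8, 0.3], [0.2, 0.7]], evidence=["A"], evidence_card=[2]),
    TabularCPD("D", 2,
               [[0.9, 0.6, 0.5, 0.1],
                [0.1, 0.4, 0.5, 0.9]],
               evidence=["B", "C"], evidence_card=[2, 2]),
)

# Hypothetical requirement format: (evidence dict, node, state, target probability).
# "If A=1, then D=1 with about 65%" becomes:
requirements = [({"A": 1}, "D", 1, 0.65)]

def error(model, requirements):
    """Sum of squared gaps between inferred and required probabilities."""
    infer = VariableElimination(model)
    total = 0.0
    for evidence, node, state, target in requirements:
        q = infer.query([node], evidence=evidence, show_progress=False)
        total += (q.values[state] - target) ** 2
    return total

print(error(model, requirements))
```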
Any ideas?

Replies: 1 comment · 3 replies

@ferencbartok This sounds a lot like iterative proportional fitting to me. Could you have a look at the problem statement described in https://arxiv.org/pdf/1207.1356? (The first paragraph of the introduction describes the problem setting.) If that is exactly your use case, I have a local implementation of the algorithm described in the paper that I can share or add to pgmpy.
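For context, a minimal sketch of textbook iterative proportional fitting on a joint probability table, with hypothetical target marginals. This only illustrates the general technique; the algorithm in the linked paper and the implementation mentioned above may differ:

```python
import numpy as np

def ipf(joint, target_marginals, n_iter=100, tol=1e-9):
    """Rescale a joint probability table so its marginals match the targets.

    joint            : ndarray, initial joint distribution over all variables
    target_marginals : dict mapping axis index -> 1-D target marginal (ndarray)
    """
    joint = joint / joint.sum()
    for _ in range(n_iter):
        max_change = 0.0
        for axis, target in target_marginals.items():
            # Current marginal along this axis.
            other_axes = tuple(i for i in range(joint.ndim) if i != axis)
            current = joint.sum(axis=other_axes)
            # Multiplicative update: scale each slice to match the target marginal.
            ratio = np.where(current > 0, target / current, 0.0)
            shape = [1] * joint.ndim
            shape[axis] = -1
            joint = joint * ratio.reshape(shape)
            max_change = max(max_change, np.abs(target - current).max())
        if max_change < tol:
            break
    return joint

# Example: start from a uniform 2x2 joint and force P(axis0 = 1) = 0.65.
joint = np.full((2, 2), 0.25)
fitted = ipf(joint, {0: np.array([0.35, 0.65])})
print(fitted.sum(axis=1))  # -> [0.35, 0.65]
```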