Reuse allocated arrays in FFBS sampler #81
Conversation
pymc3_hmm/step_methods.py
Outdated
# log_lik_t = log_lik_fn(state_seqs)
log_lik_t = np.stack([log_lik_fn(np.broadcast_to(m, N)) for m in range(M)])
if self.log_lik_t is not None:
    log_lik_t = self.log_lik_t
This is the right idea, but we still need to recompute the log-likelihood on every call to this function, because the parameters we're sampling will change on each iteration, which in turn makes log_lik_fn produce new values.

The question is: how do we recompute the log_lik_fn steps below in-place (or as close to in-place as possible)? In other words, log_lik_fn is allocating and filling a large length-N array, but now we want it to fill an existing pre-allocated array with the values it computes.

The answer most likely involves creating our own log_lik_fn function, instead of using the one provided by PyMC3. Ultimately, log_lik_fn is just a compiled Theano/Aesara function, so we need to do the in-place updating at that level first; however, that might be pretty straightforward, because I believe we can create a shared variable that log_lik_fn updates in-place.

For that matter, we could turn the entire computation (i.e. the stack and range) into a Theano/Aesara graph, use a shared variable and have it updated in-place, then call that function instead of log_lik_fn and pass ffbs_astep the shared variable's underlying value.
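The allocation pattern being discussed can be illustrated outside of Theano/Aesara with plain NumPy: instead of letting np.stack allocate a fresh (M, N) array on every step, allocate one buffer up front and write each row into it. This is only a sketch of the idea; log_lik_fn here is a placeholder, not the compiled likelihood function from PyMC3.

```python
import numpy as np

M, N = 2, 5

def log_lik_fn(states):
    # Placeholder likelihood, standing in for the compiled PyMC3 function.
    return -0.5 * (states - np.arange(N)) ** 2

# Allocated once, reused on every sampler step.
log_lik_buffer = np.empty((M, N))

def fill_log_lik(out):
    for m in range(M):
        # Write each state's log-likelihood row directly into the buffer,
        # instead of np.stack allocating a new array each call.
        out[m, :] = log_lik_fn(np.broadcast_to(m, N))
    return out

result = fill_log_lik(log_lik_buffer)
```

The values are still recomputed every call (as they must be, since the parameters change each iteration); only the output storage is reused.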
Force-pushed from a2b711e to f53dcc5
Looking through the Theano graph of how we computed the log-likelihood.
Force-pushed from 9fd34e4 to f9abf35
I am not sure I understand correctly, but I think we would like to duplicate the computation graph for each state.
Force-pushed from efa3bfb to b447b71

Force-pushed from 82f6892 to d76747e
Here I get the traversal function conditioned on SwitchingProcess, and find the location of the Alloc node. However, I think I am still failing the FFBS unit test: the states we are getting are not the states we are supposed to get. @brandonwillard, if you can review this portion that would be great, as I am not confident that this is the right way to set S_t. What I am trying to do is set S_t to be all 0 and all 1 for the two states, just like in the original setup. However, it seems like every run of ffbs.step() gives a different result for the states, which makes me question whether this is right.
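One thing worth noting about run-to-run variation: FFBS draws the state sequence stochastically, so different calls can legitimately return different states. A quick way to check whether the variation comes from the sampler's randomness rather than a bug is to fix the seed. This is a hedged illustration with a hypothetical sample_states stand-in, not the sampler's actual API:

```python
import numpy as np

def sample_states(probs, rng):
    # Draw one state per time point from its (posterior) probabilities;
    # this stands in for a single stochastic FFBS draw.
    return np.array([rng.choice(len(p), p=p) for p in probs])

probs = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

draw1 = sample_states(probs, np.random.default_rng(42))
draw2 = sample_states(probs, np.random.default_rng(42))
# Same seed -> identical state sequence; different seeds may differ.
```

If seeding makes the draws reproducible but they still disagree with the expected fixture states, the discrepancy is in the setup rather than the randomness.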
pymc3_hmm/step_methods.py
Outdated
node = queue.pop(0)
if node not in visited:
    visited.append(node)
    if node.__str__() == "Alloc.0":
Use isinstance on node.owner.op instead.
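The suggested pattern can be sketched without Theano/Aesara: compare against the op's type rather than its string representation, which is fragile (it depends on naming and output indexing). FakeAlloc and FakeNode below are simplified stand-ins for the Alloc op and graph nodes, not the real Theano classes:

```python
class FakeAlloc:
    """Stand-in for the Theano/Aesara Alloc op."""

class FakeNode:
    """Simplified graph node; real variables expose their op via owner."""
    def __init__(self, op, inputs=()):
        self.owner = self  # simplification: real nodes point at an Apply node
        self.op = op
        self.inputs = list(inputs)

def find_alloc_nodes(root):
    """Breadth-first search collecting nodes whose op is an Alloc."""
    queue, visited, found = [root], [], []
    while queue:
        node = queue.pop(0)
        if node not in visited:
            visited.append(node)
            # Type check instead of node.__str__() == "Alloc.0".
            if isinstance(node.owner.op, FakeAlloc):
                found.append(node)
            queue.extend(node.inputs)
    return found

leaf = FakeNode(FakeAlloc())
root = FakeNode(object(), inputs=[leaf])
```

In the real graph, node.owner can be None for inputs and constants, so a production version would also guard with `node.owner is not None`.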
Force-pushed from dce65ce to d26de7e
Attempts to improve FFBS by storing the likelihood series within the sampler. I feel like it is probably not as simple as this, but hopefully this is the right idea? If log_lik_fn is subject to change across each step, then this would fail.

Closes #76
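The failure mode mentioned here is easy to demonstrate in miniature: if the sampled parameters change between steps, a likelihood array cached at an earlier step no longer matches the current one. The names below (log_lik, mu) are illustrative, not the sampler's actual API:

```python
import numpy as np

def log_lik(y, mu):
    # Toy Gaussian-style log-likelihood, up to a constant.
    return -0.5 * (y - mu) ** 2

y = np.array([0.0, 1.0, 2.0])

cached = log_lik(y, mu=0.0)  # stored at the first step
fresh = log_lik(y, mu=1.0)   # mu was resampled on a later step

# The cached array is stale: reusing it would bias the state draws.
```

So the buffer itself can be reused across steps, but its contents must be refilled from the current parameter values every iteration.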