There are a few straightforward things we can do to improve sampling performance:
Re-use allocated log-likelihood arrays
These arrays can be fairly large (their shape is the length of the series by the number of states), and they are re-created/allocated at each sample step. If we allocate them once in thread-local storage within the sampler object and operate on them in-place, we can dramatically reduce the cost of repeatedly constructing new, large arrays.
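A minimal sketch of the buffer-reuse pattern follows; the names (`FFBSStep`, `compute_loglik`, `n_obs`, `n_states`) are hypothetical and stand in for whatever the sampler object actually looks like — only the thread-local, in-place idea is the point:

```python
import threading

import numpy as np


class FFBSStep:
    """Hypothetical sampler step that reuses per-thread log-likelihood buffers."""

    def __init__(self, n_obs, n_states):
        self.n_obs = n_obs
        self.n_states = n_states
        # Thread-local storage so concurrently sampled chains don't share buffers.
        self._local = threading.local()

    def _loglik_buffer(self):
        buf = getattr(self._local, "loglik", None)
        if buf is None or buf.shape != (self.n_obs, self.n_states):
            buf = np.empty((self.n_obs, self.n_states))
            self._local.loglik = buf
        return buf

    def compute_loglik(self, obs, means):
        loglik = self._loglik_buffer()
        # Fill the (n_obs, n_states) array in-place instead of allocating a
        # new one every step (Gaussian log-likelihood up to constants, as an
        # illustration).
        np.subtract(obs[:, None], means[None, :], out=loglik)
        np.square(loglik, out=loglik)
        loglik *= -0.5
        return loglik
```

Using `out=` arguments and in-place operators keeps every step writing into the same allocation, so the per-step cost is just the arithmetic, not the memory churn.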
A Cython/Numba implementation of the FFBS step.
This would likely be only a marginal improvement in most cases, but it might be possible to reformulate the approach entirely at this level and get more out of it (e.g. better scaling in series length). One big caveat with this approach, though: individual, in-loop calls back to the log-likelihood functions from C/elsewhere probably incur too much overhead to be worthwhile.
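For concreteness, here is a hedged Numba sketch of a forward-filter backward-sample (FFBS) step for a discrete HMM. All names (`ffbs`, `log_lik`, `Gamma`, `gamma_0`) are hypothetical, and it assumes the log-likelihoods have already been computed in bulk up front — precisely to avoid the per-observation callback overhead mentioned above:

```python
import numpy as np
from numba import njit


@njit
def _sample_categorical(p):
    # Inverse-CDF draw from a normalized probability vector.
    u = np.random.random()
    acc = 0.0
    for k in range(p.shape[0]):
        acc += p[k]
        if u <= acc:
            return k
    return p.shape[0] - 1


@njit
def ffbs(log_lik, Gamma, gamma_0):
    """FFBS sketch.

    log_lik: (T, K) per-state observation log-likelihoods (precomputed)
    Gamma:   (K, K) row-stochastic transition matrix
    gamma_0: (K,) initial state probabilities
    """
    T, K = log_lik.shape
    alphas = np.empty((T, K))

    # Forward filtering on normalized probabilities to avoid underflow.
    lik = np.exp(log_lik[0] - log_lik[0].max())
    alpha = gamma_0 * lik
    alpha /= alpha.sum()
    alphas[0] = alpha
    for t in range(1, T):
        lik = np.exp(log_lik[t] - log_lik[t].max())
        alpha = np.dot(alphas[t - 1], Gamma) * lik
        alpha /= alpha.sum()
        alphas[t] = alpha

    # Backward sampling of the state sequence.
    states = np.empty(T, dtype=np.int64)
    states[T - 1] = _sample_categorical(alphas[T - 1])
    for t in range(T - 2, -1, -1):
        p = alphas[t] * Gamma[:, states[t + 1]]
        p /= p.sum()
        states[t] = _sample_categorical(p)

    return states
```

Because both loops are over the series length, a compiled version like this mainly buys back Python interpreter overhead; any bigger win would have to come from rethinking the algorithm itself at this level.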
#81 added some important improvements, including two forms of in-place updating for series-length arrays; however, there are still a couple of important in-place opportunities that involve the Theano graphs (see the TODOs here).
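One standard way to give Theano room for in-place updates is to keep the series-length array in a shared variable and express the recomputation as an update, so the optimizer can overwrite the existing storage when it's safe. A minimal sketch, with entirely hypothetical shapes and model terms:

```python
import numpy as np
import theano
import theano.tensor as tt

T, K = 1000, 3  # hypothetical series length and number of states

# Shared storage that persists across calls, rather than a fresh output array.
loglik = theano.shared(np.zeros((T, K)), name="loglik")

obs = tt.vector("obs")
mu = tt.vector("mu")
# Illustrative Gaussian log-likelihood terms (up to constants).
new_loglik = -0.5 * tt.sqr(obs[:, None] - mu[None, :])

# The update lets Theano's optimizer apply the write destructively
# where it determines that is safe.
update_loglik = theano.function([obs, mu], [], updates=[(loglik, new_loglik)])
```

Whether the remaining TODOs call for exactly this pattern or something graph-internal is a separate question; this just illustrates the shared-variable/update mechanism.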