Started implementation of random variables with PyTorch backend [WIP] #1075
base: main
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff            @@
##              main    #1075    +/-  ##
=========================================
  Coverage    82.10%   82.10%
=========================================
  Files          183      184     +1
  Lines        47924    47970    +46
  Branches      8632     8636     +4
=========================================
+ Hits         39348    39386    +38
- Misses        6410     6416     +6
- Partials      2166     2168     +2
static_shape = rv.type.shape
batch_ndim = op.batch_ndim(node)

# Try to pass static size directly to JAX
nit: pytorch
# XXX replace
state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
I actually don't mind this approach! Torch has a lot of wrapping and abstraction on top of its random generation, so if we just keep a little bit of state around it feels a bit simpler.
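A minimal sketch of the "keep a little bit of state around" idea, not the PR's final code: the rng dict and its "pytorch_state" key are taken from the snippets in this thread, and draw_bernoulli is a hypothetical helper. A torch.Generator's state is just a tensor, so it can live in a plain dict and be restored before every draw.

import torch

rng = {"pytorch_state": torch.Generator().get_state()}

def draw_bernoulli(rng, p):
    gen = torch.Generator()
    gen.set_state(rng["pytorch_state"])         # restore the stored state
    sample = torch.bernoulli(p, generator=gen)  # draw using that generator
    rng["pytorch_state"] = gen.get_state()      # stash the advanced state
    return sample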
thunk_inputs = []
for n in self.fgraph.inputs:
    sinput = storage_map[n]
    if isinstance(sinput[0], RandomState | Generator):
        new_value = pytorch_typify(
            sinput[0], dtype=getattr(sinput[0], "dtype", None)
Why is this needed?
static_shape = rv.type.shape
batch_ndim = op.batch_ndim(node)

# Try to pass static size directly to JAX
This static size is a JAX limitation that shouldn't exist in PyTorch
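For reference, a small sketch of why the static-size path shouldn't be needed in PyTorch: unlike JAX under jit, torch can take the target shape from values only known at runtime (the names below are illustrative, not from the PR).

import torch

p = torch.tensor([0.3, 0.7])
n = int(torch.randint(2, 5, ()).item())     # size only known at runtime
sample = torch.bernoulli(p.expand((n, 2)))  # no static shape required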
state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
return (rng, sample)
It should return a new state; otherwise the draws will be the same the next time it's evaluated.
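A sketch of what that could look like, assuming the same rng dict as above; bernoulli_sample is a hypothetical wrapper, not the PR's final code.

def bernoulli_sample(rng, p, size):
    gen = torch.Generator()
    gen.set_state(rng["pytorch_state"])
    sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
    rng["pytorch_state"] = gen.get_state()  # state has advanced past this draw
    return rng, sample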
# XXX replace
state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
Shouldn't it just broadcast? Why copy?
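A sketch of the broadcast-only alternative: Tensor.expand returns a broadcasted view without materializing a copy, and torch.bernoulli can consume it directly (reusing p, size, and gen from the snippet above).

sample = torch.bernoulli(p.expand(size), generator=gen)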
Description
Related Issue
Checklist
Type of change
📚 Documentation preview 📚: https://pytensor--1075.org.readthedocs.build/en/1075/