Started implementation of random variables with PyTorch backend [WIP] #1075

Open · wants to merge 1 commit into base: main
Conversation

@twaclaw (Contributor) commented Nov 10, 2024

Description

Related Issue

  • Closes #
  • Related to #

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

📚 Documentation preview 📚: https://pytensor--1075.org.readthedocs.build/en/1075/

codecov bot commented Nov 10, 2024

Codecov Report

Attention: Patch coverage is 81.25000% with 9 lines in your changes missing coverage. Please review.

Project coverage is 82.10%. Comparing base (a570dbf) to head (85d6080).

Files with missing lines                     Patch %   Lines
pytensor/link/pytorch/dispatch/random.py     81.57%    6 Missing and 1 partial ⚠️
pytensor/link/pytorch/dispatch/basic.py      50.00%    1 Missing and 1 partial ⚠️
Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff           @@
##             main    #1075   +/-   ##
=======================================
  Coverage   82.10%   82.10%           
=======================================
  Files         183      184    +1     
  Lines       47924    47970   +46     
  Branches     8632     8636    +4     
=======================================
+ Hits        39348    39386   +38     
- Misses       6410     6416    +6     
- Partials     2166     2168    +2     
Files with missing lines                      Coverage Δ
pytensor/link/pytorch/dispatch/__init__.py    100.00% <100.00%> (ø)
pytensor/link/pytorch/linker.py               100.00% <100.00%> (ø)
pytensor/link/pytorch/dispatch/basic.py       93.63% <50.00%> (-0.81%) ⬇️
pytensor/link/pytorch/dispatch/random.py      81.57% <81.57%> (ø)

static_shape = rv.type.shape
batch_ndim = op.batch_ndim(node)

# Try to pass static size directly to JAX
Contributor commented:

nit: pytorch

# XXX replace
state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
Contributor commented:

I actually don't mind this approach! Torch has a lot of wrapping and abstraction on top of its random generation, so if we just keep a little bit of state around it feels a bit simpler.
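
For reference, a minimal sketch of what keeping that bit of state around could look like (illustrative only; `typify_rng` is a hypothetical name, and the "pytorch_state" key is taken from the snippets in this PR):

import numpy as np
import torch

# Hypothetical sketch: wrap a NumPy Generator in a small dict that
# carries a torch.Generator state, mirroring the "pytorch_state" key
# used elsewhere in this PR.
def typify_rng(rng: np.random.Generator) -> dict:
    gen = torch.Generator()
    # Seed torch's generator from the NumPy one so draws are reproducible.
    gen.manual_seed(int(rng.integers(2**63 - 1)))
    return {"pytorch_state": gen.get_state()}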

thunk_inputs = []
for n in self.fgraph.inputs:
    sinput = storage_map[n]
    if isinstance(sinput[0], RandomState | Generator):
        new_value = pytorch_typify(
            sinput[0], dtype=getattr(sinput[0], "dtype", None)
        )
Contributor commented:

Why is this needed?

static_shape = rv.type.shape
batch_ndim = op.batch_ndim(node)

# Try to pass static size directly to JAX
Member commented:

This static size is a JAX limitation that shouldn't exist in PyTorch
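
For contrast, a minimal sketch of the difference (illustrative names, not this PR's code): in eager PyTorch the sample shape can be computed at run time, whereas jitted JAX needs it to be static at trace time.

import torch

# Illustrative only: the size tuple is built at run time, which eager
# PyTorch handles without the static-shape workaround the JAX backend needs.
def draw_bernoulli(p: torch.Tensor, batch: int) -> torch.Tensor:
    size = (batch, *p.shape)
    return torch.bernoulli(p.expand(size))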

state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
return (rng, sample)
@ricardoV94 (Member) commented Nov 11, 2024:

It should return a new state, otherwise the draws will be the same the next time it's evaluated
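
A minimal sketch of that fix, reusing the `rng` dict and variables from the snippet above (illustrative, not the PR's final code):

state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
# Persist the advanced generator state so the next evaluation
# produces fresh draws instead of replaying these.
rng["pytorch_state"] = gen.get_state()
return (rng, sample)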

# XXX replace
state_ = rng["pytorch_state"]
gen = torch.Generator().set_state(state_)
sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
Member commented:

Shouldn't it just broadcast? Why copy?

Suggested change
-sample = torch.bernoulli(torch.expand_copy(p, size), generator=gen)
+sample = torch.bernoulli(p.expand(size), generator=gen)
