
Help needed: is there a way to call a function inside a numba @cuda.jit kernel function? If not, how should I use the package on a GPU? #262

Open
anbonimus opened this issue Feb 11, 2025 · 0 comments

@anbonimus

Hi, I was trying to use the odeint function within a numba @cuda.jit kernel function, but I got this error message:

numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Untyped global name 'odeint': Cannot determine Numba type of <class 'function'>

File "temp.py", line 138:
def ode_kernel(Tk_avg):

t = torch.linspace(0,tf,2000)#200000
y = odeint(atom,torch.Tensor(ini_state),t,rtol=10**-10,atol=10**-20,method='scipy_solver',options={'solver':'Radau'})
^

During: Pass nopython_type_inference

How should I make it work? If torchdiffeq is not compatible with numba, how should I use it on a GPU?

Thanks in advance!
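For reference, here is a minimal sketch of the usual way to run torchdiffeq on a GPU: keep the ODE function in plain PyTorch and move the tensors to a CUDA device, rather than calling odeint inside a numba kernel (numba-compiled kernels cannot call arbitrary Python functions). The toy decay ODE below is illustrative only, standing in for the issue's `atom` system.

```python
import torch

# Pick the GPU when one is available; torchdiffeq runs the solve on
# whatever device the input tensors live on.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def decay(t, y):
    # Toy right-hand side: dy/dt = -y, so y(t) = y0 * exp(-t).
    return -y

y0 = torch.tensor([1.0], device=device)
t = torch.linspace(0.0, 1.0, 100, device=device)

try:
    from torchdiffeq import odeint
    sol = odeint(decay, y0, t)  # integrated on `device`
except ImportError:
    # Fallback so the sketch runs without torchdiffeq installed:
    # a simple explicit Euler loop on the same device.
    steps = [y0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        steps.append(steps[-1] + dt * decay(t[i - 1], steps[-1]))
    sol = torch.stack(steps)

# y(1) should be close to exp(-1) ≈ 0.368
print(float(sol[-1]))
```

The key point is that no numba compilation is involved: the solver stays in Python, and GPU execution comes from the tensors' device, the same way `torch.Tensor(ini_state)` in the snippet above would need a `.to("cuda")` (or an equivalent device argument) to run on the GPU.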
