pythran slower than scipy for convolution of long doubles #2275
Comments
serge-sans-paille added a commit that referenced this issue on Feb 5, 2025: "Related to #2275, but it's not enough to match scipy.signal.convolve performance."
I tested it with float64s and the gap is still large:
and then
whereas with numpy it is very fast.
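The timing snippets from the original comment are not preserved above. As a hedged sketch of the kind of NumPy comparison being described (array sizes here are illustrative assumptions, not the reporter's actual inputs):

```python
import numpy as np

# np.convolve dispatches to optimized C code for float64 inputs,
# which is the baseline the pythran kernel is being compared against.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = rng.standard_normal(100)

# full convolution: output length is len(x) + len(h) - 1
y = np.convolve(x, h)
```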
I was excited to see that pythran supports long doubles. I wrote the
following pythran code to compute the convolution of two arrays to
test the support.
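The reporter's code is not preserved in this capture. As a hedged illustration only, a direct-convolution kernel of the kind one might compile with pythran could look like the following (the function name `conv` and the `float128` export annotation are assumptions, not the original code):

```python
import numpy as np

# pythran export conv(float128[], float128[])
def conv(x, h):
    """Direct nested-loop convolution; pythran compiles this to native code,
    and the same file also runs unmodified as pure Python."""
    n = x.shape[0]
    m = h.shape[0]
    out = np.zeros(n + m - 1, dtype=x.dtype)
    for i in range(n):
        for j in range(m):
            out[i + j] += x[i] * h[j]
    return out
```

Compiling with `pythran <file>.py` produces a native extension module exposing the same function.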
I then tested with the array
scipy is abandoning its support for long doubles, but for the moment it
still gives the correct answer if you use method='direct'.
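A minimal sketch of the scipy call being described (input values are illustrative; the key point is forcing `method='direct'` so the computation stays in extended precision rather than going through double-precision FFTs):

```python
import numpy as np
from scipy.signal import convolve

# long-double inputs; the values are illustrative
x = np.array([1, 2, 3], dtype=np.longdouble)
h = np.array([0, 1], dtype=np.longdouble)

# method='direct' evaluates the convolution sum directly;
# method='fft' may lose extended precision in the transforms.
y = convolve(x, h, method='direct')
```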
I was wondering what scipy is doing to get the speedup over pythran?