Optimize cuda::minimum/maximum for float, double, __half, __nv_bfloat16, __float128 #5034
base: main
Conversation
🟨 CI finished in 2h 25m: Pass: 99%/183 | Total: 2d 03h | Avg: 16m 48s | Max: 1h 40m | Hits: 83%/297220

| | Project |
|---|---|
| | CCCL Infrastructure |
| | CCCL Packaging |
| +/- | libcu++ |
| | CUB |
| | Thrust |
| | CUDA Experimental |
| | stdpar |
| | python |
| | CCCL C Parallel Library |
| | Catch2Helper |

Modifications in project or dependencies?

| | Project |
|---|---|
| | CCCL Infrastructure |
| +/- | CCCL Packaging |
| +/- | libcu++ |
| +/- | CUB |
| +/- | Thrust |
| +/- | CUDA Experimental |
| +/- | stdpar |
| +/- | python |
| +/- | CCCL C Parallel Library |
| +/- | Catch2Helper |
🏃 Runner counts (total jobs: 183)

| # | Runner |
|---|---|
| 125 | linux-amd64-cpu16 |
| 15 | windows-amd64-cpu16 |
| 12 | linux-arm64-cpu16 |
| 12 | linux-amd64-gpu-rtxa6000-latest-1 |
| 11 | linux-amd64-gpu-rtx2080-latest-1 |
| 5 | linux-amd64-gpu-h100-latest-1 |
| 3 | linux-amd64-gpu-rtx4090-latest-1 |
```cpp
#if _CCCL_HAS_NVBF16()
# include <cuda_bf16.h>
#endif
#if _CCCL_HAS_NVFP16()
# include <cuda_fp16.h>
#endif
#if _CCCL_HAS_FLOAT128()
# include <crt/device_fp128_functions.h>
#endif
```
We should not include those headers individually, but the generic one. We should also move the fp128 include there:
```diff
-#if _CCCL_HAS_NVBF16()
-# include <cuda_bf16.h>
-#endif
-#if _CCCL_HAS_NVFP16()
-# include <cuda_fp16.h>
-#endif
-#if _CCCL_HAS_FLOAT128()
-# include <crt/device_fp128_functions.h>
-#endif
+# include <cuda/std/__floating_point/nvfp_types.h>
+#if _CCCL_HAS_FLOAT128()
+# include <crt/device_fp128_functions.h>
+#endif
```
```diff
@@ -32,12 +32,37 @@ __host__ __device__ constexpr bool test()
   test<int>(1, 0, 1) && //
   test<int>(0, 0, 0) && //
   test<int>(-1, 1, 1) && //
-  test<char>('a', 'b', 'b');
+  test<char>('a', 'b', 'b') && //
```
Please refactor this function to just call `test<meow>()` individually and then return true.
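A hedged sketch of the requested shape (not the actual file), assuming the existing `test<T>(lhs, rhs, expected)` helper returns a bool that can be checked with `assert`; only the calls visible in the hunk above are shown:

```cpp
// Sketch only: each per-type call is checked individually, and the
// function returns true at the end, per the review request.
__host__ __device__ constexpr bool test()
{
  assert(test<int>(1, 0, 1));
  assert(test<int>(0, 0, 0));
  assert(test<int>(-1, 1, 1));
  assert(test<char>('a', 'b', 'b'));
  // ... further per-type calls (float, double, __half, ...) added the same way ...
  return true;
}
```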
```cpp
{
  if constexpr (_CUDA_VSTD::is_same_v<_Tp, float>)
  {
    NV_IF_TARGET(NV_IS_DEVICE, (return ::fmaxf(__lhs, __rhs);))
```
Important: isn't this a behavior change for NaNs?

Before: `maximum{}(NaN, 1) == NaN`
After: `maximum{}(NaN, 1) == 1`

The documentation for `fmaxf` says:

> Returns the larger of two floating-point arguments, treating NaNs as missing data (between a NaN and a numeric value, the numeric value is chosen).

Is this what we want? And is it consistent with any fast path in CUB that detects that the operator is maximum and then applies custom logic, side-stepping this actual operator?
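For illustration (not code from the PR), a small host-side comparison of the two semantics; `ternary_max` is a made-up stand-in for the previous generic implementation:

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for the previous generic behavior of maximum{}: a < b ? b : a.
// With NaN on the left, NaN < 1 is false, so the NaN is returned.
static float ternary_max(float a, float b)
{
  return a < b ? b : a;
}

int main()
{
  const float nan = std::nanf("");
  std::printf("ternary_max(NaN, 1) = %f\n", ternary_max(nan, 1.0f)); // nan
  std::printf("fmax(NaN, 1)        = %f\n", std::fmax(nan, 1.0f));   // 1.000000 (NaN treated as missing data)
  return 0;
}
```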
Description

`cuda::minimum`/`cuda::maximum` is implemented as `a < b ? a : b`. This is correct, but it produces non-optimal code for device code. This PR adds specializations for `float`, `double`, `__half`, `__nv_bfloat16`, and `__float128`.

I was tempted to add support for vector types, but the semantics are ambiguous, e.g. `a.x < b.x ? a : (a.y < b.y ? a : b)` or `T{a.x < b.x ? a.x : b.x, a.y < b.y ? a.y : b.y}` (see the sketch below).
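To make the ambiguity concrete, here is a hypothetical sketch (not part of this PR) of the two candidate semantics for a `float2` minimum; the function names are made up:

```cpp
#include <vector_types.h> // defines float2; illustrative only

// Candidate 1: pick one of the two operands unmodified, based on componentwise comparisons.
__host__ __device__ inline float2 pick_operand_min(float2 a, float2 b)
{
  return a.x < b.x ? a : (a.y < b.y ? a : b);
}

// Candidate 2: build a new vector from the per-component minima (may mix components of a and b).
__host__ __device__ inline float2 elementwise_min(float2 a, float2 b)
{
  return float2{a.x < b.x ? a.x : b.x, a.y < b.y ? a.y : b.y};
}
```

Candidate 1 always returns one of the inputs unchanged, while candidate 2 can combine components from both inputs; since neither is obviously the right meaning of "minimum" for vector types, the PR leaves them out.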