Implement batched gemm wmma (RDNA batched gemm) based on wmma cshuffle v3 #2319
Open
krithalith wants to merge 15 commits into ROCm:develop from StreamHPC:2025_06_10-implement-batched-gemm-wmma-cshuffle-v3
+1,684 −22
Conversation
bartekxk reviewed Jun 10, 2025
Four review threads on include/ck/tensor_operation/gpu/device/impl/device_batched_gemm_wmma_cshuffle_v3.hpp, all resolved (three marked outdated).
Force-pushed 44f21c8 to 6710edb
…gemm in general to gfx11 and gfx12 categories, and split existing batched_gemm test into xdl and wmma versions. Updated profiler and instance factory. For now only adding f16-row-row-row-GemmDefault. For now actual device instance list is empty.
…leV3 and make sure it's used in the instance factory and tests. Currently the new batched device level struct cannot actually handle batching, but it does pass tests with a trivial batch size of 1, meaning that the overall structure is good.
…eV3. Batching arguments not passed to kernel yet.
…ffleV3. In principle the whole thing works now, just need to add other data types and perhaps do some cleanup.
…tching XDL (for f16).
… shapes. Some of the original test cases for batched gemm do not work based on cshuffle v3 because the dimensions are too small.
…main-k-block-loop, check compute type, packed buffer size calc. Ported new instance lists.
…ompatible test problems.
…ase + small fixups.
…ile_batched_gemm_impl() from test_batched_gemm_wmma to match latest definition of that function.
Force-pushed 6710edb to d436ed1
I just rebased on develop again, and since my fix to the argument order of profile_batched_gemm_impl() was merged, I had to update the argument order in the newly introduced test_batched_gemm_wmma one last time (verified normal performance).
Proposed changes
This MR implements batched gemm for wmma, closely based on the existing universal gemm wmma (cshuffle v3).
A new device-level struct called DeviceBatchedGemm_Wmma_CShuffleV3 was added, which is very closely based on DeviceGemm_Wmma_CShuffleV3. Note that since batched gemms must inherit from the DeviceBatchedGemm base class, there is currently no support for some of the extra members that appear in the DeviceGemmV2 base class (and not in the DeviceGemm base class). Effectively this means that k batching and permuteA/B are not supported right now. This could be resolved by introducing a new batched gemm base class with those extra features, but that would probably also require changes in the instance factories and profiler. I believe these features are not required for now.
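As a minimal sketch of the interface difference described above, the two simplified stand-in classes below illustrate why k batching and permute flags cannot be exposed through the batched base class. The names and parameter lists are illustrative only and are not the actual CK declarations.

```cpp
#include <memory>

// Simplified stand-in; the real CK base argument type carries more state.
struct BaseArgument { virtual ~BaseArgument() = default; };

// DeviceGemmV2-style interface: split-K ("k batching") and permute flags are
// part of the argument-creation API.
struct DeviceGemmV2Like
{
    virtual std::unique_ptr<BaseArgument>
    MakeArgumentPointer(const void* a, const void* b, void* c,
                        int M, int N, int K,
                        int StrideA, int StrideB, int StrideC,
                        int KBatch /*, permuteA/B flags, ... */) = 0;
    virtual ~DeviceGemmV2Like() = default;
};

// DeviceBatchedGemm-style interface: batch strides and a batch count instead,
// with no KBatch or permute parameters to forward to the new device struct.
struct DeviceBatchedGemmLike
{
    virtual std::unique_ptr<BaseArgument>
    MakeArgumentPointer(const void* a, const void* b, void* c,
                        int M, int N, int K,
                        int StrideA, int StrideB, int StrideC,
                        long BatchStrideA, long BatchStrideB, long BatchStrideC,
                        int BatchCount) = 0;
    virtual ~DeviceBatchedGemmLike() = default;
};
```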
A new custom kernel, kernel_batched_gemm_wmma_cshuffle_v3(), was added, which is closely based on kernel_gemm_wmma_cshuffle_v3(). To implement batching, the kernel is simply launched with an increased number of workgroups by raising the gridY dimension from 1 to the batch count. The gridZ dimension could not be used because it is already occupied by the k-batching calculations, but gridY was still completely unused.
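A minimal sketch of this launch scheme is shown below. It is not the actual kernel_batched_gemm_wmma_cshuffle_v3(); the kernel name, parameters, and tile counts are illustrative, and the per-tile GEMM body is elided.

```cpp
#include <hip/hip_runtime.h>

// Illustrative only: blockIdx.y selects the batch, blockIdx.x enumerates the
// output tiles, and blockIdx.z remains reserved for the split-K path.
__global__ void batched_gemm_sketch(const float* p_a, const float* p_b, float* p_c,
                                    long batch_stride_a, long batch_stride_b,
                                    long batch_stride_c)
{
    const long g_idx = blockIdx.y; // one GEMM of the batch per gridY slice

    const float* p_a_g = p_a + g_idx * batch_stride_a;
    const float* p_b_g = p_b + g_idx * batch_stride_b;
    float*       p_c_g = p_c + g_idx * batch_stride_c;

    // ... the unbatched cshuffle-v3 GEMM body would run on the offset pointers ...
    (void)p_a_g; (void)p_b_g; (void)p_c_g;
}

int main()
{
    // Host side: the only change versus the unbatched launch is gridDim.y.
    const int num_tiles = 64, batch_count = 8;
    dim3 grid(num_tiles, batch_count, /*k-batch slices*/ 1);
    dim3 block(256);
    batched_gemm_sketch<<<grid, block>>>(nullptr, nullptr, nullptr, 0, 0, 0);
    (void)hipDeviceSynchronize();
    return 0;
}
```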
Instances for the new operation, directly mirroring those for DeviceGemm_Wmma_CShuffleV3, were added to the instance factory (a usage sketch follows the list below). The instances support:
Datatypes: f16-f16-f16, bf16-bf16-bf16
Layouts: Row-Row-Row, Row-Column-Row, Column-Row-Row, Column-Column-Row
Padding: No padding, MNK padding
Pipelines: v1, v3
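As a hedged usage sketch, a client could query these instances through the instance factory roughly as follows. The header paths and template parameter order follow the usual CK client pattern and may not match the repository exactly.

```cpp
#include "ck/ck.hpp"
#include "ck/tensor_operation/gpu/device/tensor_layout.hpp"
#include "ck/tensor_operation/gpu/device/device_batched_gemm.hpp"
#include "ck/tensor_operation/gpu/element/element_wise_operation.hpp"
#include "ck/library/tensor_operation_instance/gpu/batched_gemm.hpp"

using Row         = ck::tensor_layout::gemm::RowMajor;
using F16         = ck::half_t;
using PassThrough = ck::tensor_operation::element_wise::PassThrough;

// Base-class type the factory is keyed on (f16, Row-Row-Row shown here).
using DeviceOp = ck::tensor_operation::device::DeviceBatchedGemm<
    Row, Row, Row, F16, F16, F16, PassThrough, PassThrough, PassThrough>;

int main()
{
    // Returns all registered instances for this problem type; on gfx11/gfx12
    // this would now include the DeviceBatchedGemm_Wmma_CShuffleV3 instances.
    const auto op_ptrs = ck::tensor_operation::device::instance::
        DeviceOperationInstanceFactory<DeviceOp>::GetInstances();
    return op_ptrs.empty() ? 1 : 0;
}
```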
Gtest tests were added based on those for batched gemm xdl. They test all available datatypes and layouts. Tested on RDNA3 Radeon 7900XTX (gfx1100).
Checklist
Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
clang-format on all changed files
Discussion
If this is a relatively large or complex change, feel free to start a discussion by explaining why you chose the solution you did and what alternatives you considered