
Support Context Parallel for Multi Latent Attention (MLA) #1729

Merged

cyanguwa merged 11 commits into NVIDIA:main from mla_cp on Jun 10, 2025

Conversation

@yuzhongw-nvidia (Contributor) commented on Apr 29, 2025

Description

#1561 already fixed the issue that the function AttnFuncWithCPAndKVP2P does not support MLA (multi-latent attention). Specifically, #1561 pads tensor v to head_dim_qk, converting MLA into normal attention. This PR improves on #1561 by removing the padding and using MLA kernels directly, which reduces both communication and computation overhead.
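For readers skimming the thread, here is a minimal sketch of the trade-off being removed (illustrative shapes only, not this PR's code; the head dims mirror DeepSeek-style MLA, where head_dim_qk is larger than head_dim_v):

```python
# Sketch of the #1561 padding workaround that this PR removes. NOT the PR's code.
import torch
import torch.nn.functional as F

total_tokens, num_heads = 4096, 16
head_dim_qk, head_dim_v = 192, 128  # assumed, DeepSeek-style MLA shapes

v = torch.randn(total_tokens, num_heads, head_dim_v)

# #1561: zero-pad v's head dim up to head_dim_qk so that attention kernels
# and CP communication buffers written for head_dim_q == head_dim_v can be
# reused; the padded columns are sliced off the output afterwards.
v_padded = F.pad(v, (0, head_dim_qk - head_dim_v))
assert v_padded.shape == (total_tokens, num_heads, head_dim_qk)

# Cost of the workaround: each CP rank computes on and exchanges
# head_dim_qk / head_dim_v (here 1.5x) of the V payload it actually needs.
# MLA-aware kernels that accept head_dim_qk != head_dim_v avoid both overheads.
```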

Many thanks to SuperCB from Xiaohongshu and RandMist from the WeChat team for their contributions.

Fixes # (issue)

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@yuzhongw-nvidia yuzhongw-nvidia changed the title from "Draft: [WIP] Support Context Parallel for Multi Latent Attention (MLA)" to "Support Context Parallel for Multi Latent Attention (MLA)" on May 7, 2025
@yuzhongw-nvidia yuzhongw-nvidia marked this pull request as ready for review May 7, 2025 06:18
@yuzhongw-nvidia yuzhongw-nvidia force-pushed the mla_cp branch 6 times, most recently from 091edf3 to f290c61 on May 8, 2025 05:52
@yaox12 yaox12 requested a review from xrennvidia May 9, 2025 01:32
@xrennvidia xrennvidia requested a review from cyanguwa May 21, 2025 05:27
@yanring commented on Jun 3, 2025

Hi @cyanguwa, could you help review this PR? We aim to get CP support for MLA into MCore v0.13 (code freeze by mid-June).

@eagle705 commented on Jun 4, 2025

Does this PR also cover the A100?

```python
).squeeze(0)
v_part = tex.thd_read_half_tensor(
    v_part.unsqueeze(0), cu_seqlens_kv_padded, 0
).squeeze(0)
```
A collaborator commented on this diff:

Could you please answer this question?
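For context on the excerpt above, a hedged reading (assumptions: `tex` is the `transformer_engine_torch` extension module as aliased in TE's source, and the toy sizes below are made up): `thd_read_half_tensor` gathers one half of every sequence from a THD-layout tensor, which the context-parallel P2P schedule uses for causal load balancing; with MLA, `k_part` and `v_part` have different head dims, so each is read separately instead of as one padded KV buffer.

```python
# Illustrative only; mirrors the call pattern in the diff above.
# Requires a CUDA build of TransformerEngine.
import torch
import transformer_engine_torch as tex  # assumed alias, as used in TE internals

num_heads, head_dim_v = 2, 8
# THD layout: two padded sequences of 16 tokens each, 32 tokens total.
v_part = torch.randn(32, num_heads, head_dim_v, device="cuda", dtype=torch.bfloat16)
cu_seqlens_kv_padded = torch.tensor([0, 16, 32], dtype=torch.int32, device="cuda")

# Read the first half (last argument 0) of every sequence -> 16 tokens.
v_half = tex.thd_read_half_tensor(
    v_part.unsqueeze(0), cu_seqlens_kv_padded, 0
).squeeze(0)
```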

@xrennvidia (Collaborator) commented:
/te-ci pytorch L1

@xrennvidia xrennvidia left a review:

Approving because all CP tests passed.

@cyanguwa cyanguwa added the 2.5.0 label Jun 9, 2025
@cyanguwa (Collaborator) commented on Jun 9, 2025

LGTM. Just re-running the B200 test - will merge after it passes. Thanks!

@cyanguwa (Collaborator) commented:
The CI pipeline had some issues with the B200 test, but I ran it locally and it seems fine. Merging!

@cyanguwa cyanguwa merged commit faee0e8 into NVIDIA:main Jun 10, 2025
25 of 27 checks passed