
Conversation

@LucasWilkinson (Collaborator) commented Jul 27, 2025


Purpose

Merge: vllm-project/FlashMLA#3 first

Fix an illegal memory access (IMA) that occurs when using FlashMLA with full CUDA graphs and wide expert parallelism (wide-EP).

This also updates FlashMLA (i.e. pulls in #17027), since the FlashMLA changes here were built on top of it. #17027 had been back-burnered because it showed a slight slowdown in the TP attention case, but it should provide a speedup for DP attention.
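
For reviewers unfamiliar with the failure mode: a full CUDA graph bakes device pointers in at capture time, so every buffer a captured kernel touches must be a persistent allocation that outlives capture. The plain-PyTorch sketch below (illustrative only, not this PR's code; the tensor names are made up) shows why reallocating a buffer after capture breaks replay:

import torch

# Capture a trivial graph that reads static_in and writes static_out.
static_in = torch.zeros(8, device="cuda")
static_out = torch.zeros(8, device="cuda")

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out.copy_(static_in + 1)

# Safe replay: mutate the captured allocation in place.
static_in.fill_(41.0)
g.replay()
torch.cuda.synchronize()
print(static_out)  # all 42s

# Unsafe: rebinding the name creates a new allocation. The graph still
# dereferences the pointer it captured, which may since have been freed or
# recycled -- stale results at best, an illegal memory access at worst.
static_in = torch.ones(8, device="cuda")
g.replay()  # undefined behavior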

Test Plan

Run the llm-d benchmark that was previously failing with this IMA.

Test Result

The previously failing llm-d benchmark now passes.


Signed-off-by: Lucas Wilkinson <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the v1 label Jul 27, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly addresses a potential issue with FlashMLA under full CUDA graph capture, especially in distributed settings. By moving the buffer allocation from a lazy, in-place approach to a pre-allocation strategy in the __init__ method, the code becomes more robust and compliant with CUDA graph requirements. The changes are logical and well-implemented. I have one suggestion to replace a magic number with a constant to improve long-term maintainability.
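
To make the described change concrete, here is a minimal sketch of the pre-allocation strategy, simplified rather than taken from the PR diff. Only the buffer name cg_buf_num_splits and the _build_decode guard come from the traceback quoted later in this thread; the class name, constructor arguments, and sizing are assumptions:

import torch

class FlashMLADecodeBuilderSketch:
    def __init__(self, max_num_tokens: int, device: torch.device):
        # Allocate once, up front, sized for the largest CUDA-graph capture
        # size, so every capture and replay reuses the same device pointer.
        self.cg_buf_num_splits = torch.empty(
            max_num_tokens + 1, dtype=torch.int32, device=device)

    def _build_decode(self, n: int) -> torch.Tensor:
        # Hand out a view of the persistent buffer instead of allocating a
        # fresh tensor; the guard mirrors the assertion in the traceback.
        assert n <= self.cg_buf_num_splits.size(0)
        return self.cg_buf_num_splits[:n]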

tlrmchlsmth added a commit to tlrmchlsmth/llm-d-cks-image that referenced this pull request Jul 27, 2025
@mergify bot added the ci/build label Aug 1, 2025
@LucasWilkinson force-pushed the lwilkinson/fix-flashmla-full-cudagraph branch from cfe7b52 to 7492f99 on August 4, 2025 19:58
@LucasWilkinson force-pushed the lwilkinson/fix-flashmla-full-cudagraph branch from 7492f99 to 1035a64 on August 4, 2025 20:08
@LucasWilkinson changed the title from "[BugFix] Potential fix for FlashMLA full cuda-graph + DP" to "[BugFix] Fix IMA FlashMLA full cuda-graph and DP + Update FlashMLA" on Aug 5, 2025
@yewentao256 (Member) left a comment

vllm serve deepseek-ai/DeepSeek-V2-Lite --port 9256 --enable-expert-parallel --data-parallel-size 2 --trust-remote-code -O '{"full_cuda_graph": true}' --cuda-graph-sizes 16 32 64 128 256 512

Originally:

(EngineCore_0 pid=94527) AssertionError
(EngineCore_1 pid=94528)     answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_1 pid=94528)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_1 pid=94528)     return func(*args, **kwargs)
(EngineCore_1 pid=94528)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/v1/worker/gpu_worker.py", line 330, in compile_or_warm_up_model
(EngineCore_1 pid=94528)     self.model_runner._dummy_run(
(EngineCore_1 pid=94528)   File "/home/wentao/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(EngineCore_1 pid=94528)     return func(*args, **kwargs)
(EngineCore_1 pid=94528)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/v1/worker/gpu_model_runner.py", line 2206, in _dummy_run
(EngineCore_1 pid=94528)     .build_for_cudagraph_capture(common_attn_metadata)
(EngineCore_1 pid=94528)      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/v1/attention/backends/mla/common.py", line 580, in build_for_cudagraph_capture
(EngineCore_1 pid=94528)     return self.build(0, m)
(EngineCore_1 pid=94528)            ^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/v1/attention/backends/mla/common.py", line 705, in build
(EngineCore_1 pid=94528)     decode_metadata = self._build_decode(
(EngineCore_1 pid=94528)                       ^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528)   File "/home/wentao/vllm/vllm/v1/attention/backends/mla/flashmla.py", line 100, in _build_decode
(EngineCore_1 pid=94528)     assert n <= self.cg_buf_num_splits.size(0)
(EngineCore_1 pid=94528)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_1 pid=94528) AssertionError

Now:

(APIServer pid=126325) INFO:     Started server process [126325]
(APIServer pid=126325) INFO:     Waiting for application startup.
(APIServer pid=126325) INFO:     Application startup complete.

So I think this PR fixed the issue, thanks for the work! @tlrmchlsmth Could you trigger CI?
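
Tying the traceback to the repro command: the assertion fires while building metadata for CUDA-graph capture, so the persistent buffer presumably has to cover the largest value passed via --cuda-graph-sizes. A tiny sketch of that invariant (the helper function is hypothetical):

cuda_graph_sizes = [16, 32, 64, 128, 256, 512]  # from the repro command above

def fits_all_captures(capture_sizes: list[int], buf_rows: int) -> bool:
    # Every capture size must satisfy the guard from the traceback:
    #   assert n <= self.cg_buf_num_splits.size(0)
    return max(capture_sizes) <= buf_rows

assert fits_all_captures(cuda_graph_sizes, buf_rows=512)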

@LucasWilkinson added the ready label (ONLY add when PR is ready to merge/full CI is needed) Aug 7, 2025
@LucasWilkinson enabled auto-merge (squash) August 7, 2025 20:12
@mgoin added the bug (Something isn't working) and deepseek (Related to DeepSeek models) labels Aug 8, 2025
@mgoin (Member) left a comment

LGTM

@simon-mo disabled auto-merge August 8, 2025 23:09
@simon-mo merged commit cd9b9de into vllm-project:main Aug 8, 2025
63 of 74 checks passed
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
Labels: bug, ci/build, deepseek, ready, v1