
Conversation

yewentao256 (Member) commented Aug 18, 2025

Purpose

Warning Once for Cutlass MLA

Replace logger.warning with logger.warning_once to avoid repeated log spam like:

(EngineCore_2 pid=2626618) WARNING 08-18 16:51:26 [cutlass_mla.py:126] Forcing num_kv_splits to 1
(EngineCore_2 pid=2626618) WARNING 08-18 16:51:26 [cutlass_mla.py:126] Forcing num_kv_splits to 1
(EngineCore_2 pid=2626618) WARNING 08-18 16:51:26 [cutlass_mla.py:126] Forcing num_kv_splits to 1
(EngineCore_2 pid=2626618) WARNING 08-18 16:51:26 [cutlass_mla.py:126] Forcing num_kv_splits to 1
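The deduplication can be done by caching on the (logger, message) pair so only the first call actually emits. Below is a minimal, illustrative sketch of such a helper (names like `warning_once` here are for illustration; vLLM's actual implementation in vllm/logger.py may differ):

```python
import functools
import logging


@functools.lru_cache(maxsize=None)
def _warn_once(logger: logging.Logger, msg: str) -> None:
    # lru_cache keys on (logger, msg): the first call with a given pair
    # reaches logger.warning; every repeat is served from the cache and
    # emits nothing. logging.Logger instances use the default identity
    # hash, so they are valid cache keys.
    logger.warning(msg)


def warning_once(logger: logging.Logger, msg: str) -> None:
    """Emit `msg` on `logger` at WARNING level at most once per process."""
    _warn_once(logger, msg)
```

Note the cache is process-wide and unbounded; for a fixed set of warning messages (as in this PR) that is fine, but a helper fed dynamic strings would grow without limit.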

github-actions bot commented Aug 18, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Aug 18, 2025
gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to reduce log spam from Cutlass MLA by replacing logger.warning with logger.warning_once. While the intention is correct, the implementation of warning_once in vllm/logger.py is flawed. It uses functools.lru_cache on a function that takes a logging.Logger object as an argument. Since Logger objects are not hashable, this will lead to a TypeError at runtime, causing a crash. This critical issue needs to be fixed in vllm/logger.py for this change to work correctly.

Signed-off-by: yewentao256 <[email protected]>
@yewentao256 yewentao256 force-pushed the wye-warning-once-cutlassMLA branch from 838a5fe to c22485c Compare August 19, 2025 00:07
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) August 19, 2025 04:05
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 19, 2025
@DarkLight1337 DarkLight1337 disabled auto-merge August 19, 2025 06:23
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) August 19, 2025 06:24
@vllm-bot vllm-bot merged commit 90bbe0a into vllm-project:main Aug 19, 2025
43 of 50 checks passed
@yewentao256 yewentao256 deleted the wye-warning-once-cutlassMLA branch August 19, 2025 14:23
princepride pushed a commit to princepride/vllm that referenced this pull request Aug 20, 2025
divakar-amd pushed a commit to divakar-amd/vllm_upstream that referenced this pull request Aug 20, 2025
cyang49 pushed a commit to cyang49/vllm that referenced this pull request Aug 20, 2025
djmmoss pushed a commit to djmmoss/vllm that referenced this pull request Aug 21, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
mengxingkongzhouhan pushed a commit to mengxingkongzhouhan/vllm that referenced this pull request Aug 30, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025