[Feature] Enable DeepGEMM Linear on B200; 1.5% E2E throughput improvement #23351
Conversation
Code Review
This pull request enables DeepGEMM for FP8 linear layers on B200 GPUs by updating the device capability check. The changes also include refactoring the DeepGEMM eligibility check into the vllm.utils.deep_gemm module for better code organization. The implementation looks good, but I've found a small redundancy in the new check function that can be improved for clarity and maintainability.
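For context, a minimal sketch of what a centralized eligibility check in vllm.utils.deep_gemm could look like. Only has_deep_gemm and is_deep_gemm_supported are names taken from this conversation; the body below is an illustrative assumption (B200 corresponds to compute capability 10.x), not the exact implementation in the PR.

import importlib.util

import torch


def has_deep_gemm() -> bool:
    # True if the DeepGEMM package can be imported at all.
    return importlib.util.find_spec("deep_gemm") is not None


def is_deep_gemm_supported() -> bool:
    # Illustrative gate: DeepGEMM FP8 kernels target Hopper (SM90) and,
    # with this change, Blackwell (SM100, e.g. B200); everything else
    # falls back to the existing CUTLASS/Triton paths.
    if not has_deep_gemm() or not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major in (9, 10)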
@mgoin CC
LGTM. Should we also take the opportunity to refactor the logic in fp8.py? It seems there is no DeepGEMM logic in Fp8LinearMethod, while Fp8MoEMethod has this possibly different local logic:
# Check for DeepGemm support.
self.allow_deep_gemm = False
if envs.VLLM_USE_DEEP_GEMM:
    if not has_deep_gemm():
        logger.warning_once("Failed to import DeepGemm kernels.")
    elif not self.block_quant:
        logger.warning_once("Model is not block quantized. Not using "
                            "DeepGemm kernels")
    elif is_deep_gemm_supported():
        logger.info_once("Using DeepGemm kernels for Fp8MoEMethod.")
        self.allow_deep_gemm = True
    else:
        logger.warning_once(
            "DeepGemm not supported on the current platform.")
Sounds good, I've recorded this and will open a separate PR for it.
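For reference, a rough sketch of the refactor being discussed: pulling the gating out of Fp8MoEMethod into one shared helper that Fp8LinearMethod could reuse. The helper name should_use_deep_gemm_for_fp8 is hypothetical and the import paths follow the module named in the review above, so the actual follow-up PR may differ.

import vllm.envs as envs
from vllm.logger import init_logger
from vllm.utils.deep_gemm import has_deep_gemm, is_deep_gemm_supported

logger = init_logger(__name__)


def should_use_deep_gemm_for_fp8(block_quant: bool, method_name: str) -> bool:
    # Same decision tree as the snippet above, flattened into early
    # returns so both linear and MoE FP8 methods can share it.
    if not envs.VLLM_USE_DEEP_GEMM:
        return False
    if not has_deep_gemm():
        logger.warning_once("Failed to import DeepGemm kernels.")
        return False
    if not block_quant:
        logger.warning_once(
            "Model is not block quantized. Not using DeepGemm kernels")
        return False
    if not is_deep_gemm_supported():
        logger.warning_once("DeepGemm not supported on the current platform.")
        return False
    logger.info_once(f"Using DeepGemm kernels for {method_name}.")
    return True

Fp8MoEMethod.__init__ would then reduce to self.allow_deep_gemm = should_use_deep_gemm_for_fp8(self.block_quant, "Fp8MoEMethod"), and Fp8LinearMethod could call the same helper.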
Purpose
Enable DeepGEMM Linear on B200
This should also fix some CUTLASS linear accuracy errors, since the weight is quantized to e8m0.
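For intuition only, a toy sketch of the e8m0-style scaling mentioned above: an e8m0 value stores just an 8-bit exponent (a power of two, with no mantissa), so a common choice is to round a per-block scale up to the nearest power of two. This is an illustrative assumption about the scale handling, not the actual kernel code.

import torch


def round_scale_to_e8m0(scale: torch.Tensor) -> torch.Tensor:
    # Smallest power of two >= scale, so rescaled values do not overflow
    # the FP8 range any more than they did with the original scale.
    return torch.exp2(torch.ceil(torch.log2(scale)))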
Test
VLLM_USE_DEEP_GEMM=1 vllm bench throughput --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100 --trust_remote_code --enable-expert-parallel

# this PR
Throughput: 40.59 requests/s, 44557.64 total tokens/s, 4059.10 output tokens/s
# main
Throughput: 39.97 requests/s, 43880.62 total tokens/s, 3997.42 output tokens/s