
Conversation

Member

@zyongye zyongye commented Aug 8, 2025

No description provided.


github-actions bot commented Aug 8, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, a small and essential subset of tests meant to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to make triton_kernels an optional dependency by guarding its import. My review focuses on the correctness of this implementation. I've identified a critical issue where the current change is incomplete and will likely cause runtime errors if triton_kernels is not installed. The code that depends on the optional imports also needs to be conditionally defined.

 from vllm.utils import has_triton_kernels

-if True:
+if has_triton_kernels():
Contributor


critical

While this change correctly guards the imports, the code that uses these imported modules (e.g., swiglu, PrecisionConfig) also needs to be conditionally defined. Without this, if triton_kernels is not installed, the Python interpreter will raise a NameError when it tries to define functions or classes that reference these names.

To fix this, you should move all code that depends on triton_kernels inside this if block. You should also consider adding an else block to define dummy implementations or raise an ImportError for the exported names (like BatchedOAITritonExperts and triton_kernel_moe_forward) to provide a clear error message to users who try to use this functionality without triton_kernels installed.
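A minimal, self-contained sketch of the suggested pattern (the `has_triton_kernels` helper below is an illustrative stand-in for `vllm.utils.has_triton_kernels`, and the placeholder class is hypothetical, not vLLM's actual implementation):

```python
import importlib.util


def has_triton_kernels() -> bool:
    # Illustrative stand-in for vllm.utils.has_triton_kernels:
    # True iff the triton_kernels package is importable.
    return importlib.util.find_spec("triton_kernels") is not None


if has_triton_kernels():
    # Real import, mirroring the hunk under review.
    from triton_kernels.matmul_ogs import PrecisionConfig  # type: ignore
else:
    class PrecisionConfig:  # placeholder so module-level references still resolve
        """Fails loudly only when someone actually tries to use it."""

        def __init__(self, *args, **kwargs):
            raise ImportError(
                "triton_kernels is not installed; this MoE backend "
                "is unavailable.")
```

With a placeholder like this, the module imports cleanly and function signatures that reference `PrecisionConfig` can still be defined; the user only sees an `ImportError`, with a clear message, when the guarded functionality is exercised.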

Collaborator


@zyongye I think gemini's comment makes sense. Can you add something like this?

if has_triton_kernels():
    import ...
else:
    FnSpecs = None
    FusedActivation = None
...

Member Author


But this function will never reach those lines if triton_kernels isn't installed. I put an error in the FusedMoe init:
https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/layer.py#L729-L730
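The author's approach of failing at layer construction rather than at import time can be sketched as follows (the `FusedMoELayer` class, its parameter, and the error message are illustrative, not the actual vLLM `FusedMoE` code):

```python
import importlib.util


def has_triton_kernels() -> bool:
    # Illustrative stand-in for vllm.utils.has_triton_kernels.
    return importlib.util.find_spec("triton_kernels") is not None


class FusedMoELayer:
    """Hypothetical layer: the dependency check happens at construction
    time, so merely importing this module never fails."""

    def __init__(self, use_triton_backend: bool = True) -> None:
        if use_triton_backend and not has_triton_kernels():
            raise ImportError(
                "triton_kernels is required for the triton MoE backend")
        self.use_triton_backend = use_triton_backend
```

The catch, as the next comment shows, is that this only covers runtime calls: annotations on module-level function definitions are still evaluated at import time unless they are guarded or deferred.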

Contributor


Just tried and I have to use:

-if True:
+if has_triton_kernels():
     import triton_kernels.swiglu
     from triton_kernels.matmul_ogs import (FnSpecs, FusedActivation,
                                            PrecisionConfig, matmul_ogs)
     from triton_kernels.routing import routing
+else:
+    PrecisionConfig = None

Otherwise, I get:

(APIServer pid=5435)   File "/root/vllm/vllm/model_executor/layers/fused_moe/gpt_oss_triton_kernels_moe.py", line 146, in BatchedOAITritonExperts
(APIServer pid=5435)     w1_precision: PrecisionConfig, w2_precision: PrecisionConfig):
(APIServer pid=5435)                   ^^^^^^^^^^^^^^^
(APIServer pid=5435) NameError: name 'PrecisionConfig' is not defined

Member Author


ok new change pushed

Collaborator


@zyongye Actually, we can just use a TYPE_CHECKING guard. I fixed the PR.
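The TYPE_CHECKING guard keeps the names visible to static type checkers while skipping the import entirely at runtime. A generic sketch of the pattern (the function below is illustrative, not the actual vLLM code):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers (mypy, pyright), never at
    # runtime, so triton_kernels does not need to be installed.
    from triton_kernels.matmul_ogs import PrecisionConfig


def make_experts(w1_precision: "PrecisionConfig") -> str:
    # The string annotation is never evaluated at runtime, so defining
    # this function cannot raise the NameError seen in the traceback above.
    return type(w1_precision).__name__
```

Combined with string annotations (or `from __future__ import annotations`, which defers evaluation of all annotations), this avoids both the runtime dependency and the `None` fallbacks.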

zyongye and others added 4 commits August 8, 2025 10:42
Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Collaborator

@WoosukKwon WoosukKwon left a comment


Thanks for the fix. Will merge after it passes the pre-commit and lints.

@WoosukKwon WoosukKwon merged commit f756a68 into vllm-project:main Aug 8, 2025
5 of 9 checks passed
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
…ect#22529)

Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
…ect#22529)

Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Signed-off-by: Noam Gat <[email protected]>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
…ect#22529)

Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Signed-off-by: Paul Pak <[email protected]>
@zyongye zyongye deleted the triton_kernels_guard branch August 15, 2025 05:45
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
…ect#22529)

Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Signed-off-by: Diego-Castan <[email protected]>
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
…ect#22529)

Signed-off-by: Yongye Zhu <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Signed-off-by: Xiao Yu <[email protected]>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025