Run ruff format on a few files. #24075
Conversation
Code Review
This pull request applies automated code formatting using ruff format across three Python files. The changes are purely stylistic, improving code readability and consistency by adjusting import statements, line wrapping, and quote styles. I have reviewed the changes and found no functional modifications or issues of high or critical severity. The formatting is consistent and aligns with standard Python style guides.
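As a rough illustration of the kind of stylistic-only rewrites `ruff format` produces, a minimal hypothetical sketch is shown below; the names and values in it are made up and are not taken from the files touched by this PR.

```python
# Hypothetical sketch of the stylistic-only changes `ruff format` applies;
# the function and argument names below are illustrative, not from this PR.


def describe_engine(model: str, dtype: str, max_len: int, seed: int) -> str:
    # String literals originally written with single quotes (e.g. 'float16')
    # are normalized to double quotes by the formatter.
    return f"model={model} dtype={dtype} max_len={max_len} seed={seed}"


# A call that exceeds the configured line length (or that keeps a trailing
# comma) is wrapped with one argument per line; what may have started as
#   summary = describe_engine('facebook/opt-125m', 'float16', 2048, 0)
# is left in the stable wrapped form:
summary = describe_engine(
    "facebook/opt-125m",
    "float16",
    2048,
    0,
)
print(summary)
```

Running `ruff format` over the affected files (or `ruff format --check` in CI) reproduces exactly this kind of rewrite without changing runtime behavior.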
Head branch was pushed to by a user without write access
* 'main' of https://github.com/845473182/vllm: (457 commits)
  [BugFix] Fix routed_scaling_factor double mul for dots1 and glm4 MoE models (vllm-project#24132)
  [Misc] Add check for dual_chunk_attention (vllm-project#24070)
  [Doc]: fix typos in Python comments (vllm-project#24115)
  [Doc]: fix typos in Python comments (vllm-project#24093)
  [Compile] Fix Compile Warning for `w4a8_mm_entry.cu` (vllm-project#23660)
  fix some typos (vllm-project#24071)
  [V1] Wrapper which plumbs request-level logits processors into vLLM batch-level logits processing (vllm-project#23656)
  Upgrade xgrammar to 0.1.23 (vllm-project#22988)
  Update release pipeline post PyTorch 2.8.0 update (vllm-project#24073)
  [XPU] Fix the bug of LoRA logits on the XPU platform (vllm-project#24081)
  [CI/Build] Disable SiluMul NVFP4 quant fusion tests (vllm-project#24121)
  [Bug] R1 Accuracy: Fix `routed_scaling_factor` Double Mul Issue (vllm-project#24119)
  [AMD][Kernel][Bugfix] Cast offsets tensor bn to tl.int64 to avoid GPU segfault (vllm-project#23692)
  [CI] Enable all hf transformers baselines in test_hybrid (vllm-project#23936)
  [Log] Only Print Profiler Results on Rank 0 (vllm-project#23370)
  Fix weights loading for Apertus (vllm-project#24100)
  [Metrics] Deprecate TPOT in favor of ITL (vllm-project#24110)
  [Bugfix] Fix packed_factor missing attribute error (vllm-project#23902)
  Run ruff format on a few files. (vllm-project#24075)
  [Bugfix] Fix transform_config parsing in Compressed Tensors (vllm-project#23945)
  ...
Summary:
Related to #23449
Test Plan:
Reviewers:
@DarkLight1337
Subscribers:
Tasks:
Tags:
Purpose
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
(Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.