
Conversation

@huachenheli (Contributor) commented Sep 2, 2025

Summary:
Related to #23449

Test Plan:

(vllm) [[email protected] ~/github/vllm (lint)]$ ruff format tests/entrypoints/test_chat_utils.py
1 file reformatted
(vllm) [[email protected] ~/github/vllm (lint)]$ ruff format vllm/entrypoints/chat_utils.py
1 file reformatted
(vllm) [[email protected] ~/github/vllm (lint)]$ ruff format vllm/entrypoints/openai/serving_engine.py
1 file reformatted
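The test plan above runs `ruff format` in rewrite mode on each file. As a sketch (not part of the PR), the same invocations can be scripted, using `ruff format --check` to verify formatting without modifying files. The file list is taken from the test plan; `format_command` is a hypothetical helper, not a vLLM or ruff API, and ruff is assumed to be installed (`pip install ruff`).

```python
# Sketch only: reproduce the test plan's ruff invocations from Python.
# format_command is a hypothetical helper, not part of vLLM or ruff.
import shutil
import subprocess

# File list taken from the PR's test plan.
FILES = [
    "tests/entrypoints/test_chat_utils.py",
    "vllm/entrypoints/chat_utils.py",
    "vllm/entrypoints/openai/serving_engine.py",
]

def format_command(path: str, check: bool = False) -> list[str]:
    """Build the argv for `ruff format`; --check reports instead of rewriting."""
    cmd = ["ruff", "format"]
    if check:
        cmd.append("--check")  # nonzero exit if the file would be reformatted
    return cmd + [path]

if __name__ == "__main__":
    if shutil.which("ruff") is None:
        print("ruff not found; install it to reproduce the test plan")
    else:
        for f in FILES:
            subprocess.run(format_command(f, check=True))
```

Running with `--check` before committing avoids surprise diffs; dropping the flag reproduces the `1 file reformatted` output shown above.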

Reviewers:
@DarkLight1337


Signed-off-by: Chenheli Hua <[email protected]>
@gemini-code-assist bot left a comment

Code Review

This pull request applies automated code formatting using ruff format across three Python files. The changes are purely stylistic, improving code readability and consistency by adjusting import statements, line wrapping, and quote styles. I have reviewed the changes and found no functional modifications or issues of high or critical severity. The formatting is consistent and aligns with standard Python style guides.
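As context for the review comment above, here is an illustrative before/after of the kinds of purely stylistic rewrites `ruff format` applies (quote normalization, spacing, line layout). These lines are hypothetical and are not taken from the PR diff.

```python
# Hypothetical before/after illustrating ruff format's stylistic rewrites;
# these examples are not from the PR diff.

# Before:  config = {'model':'x','max_len':1024}
# After ruff format (double quotes, spaces after ':' and ','):
config = {"model": "x", "max_len": 1024}

# Before:  def load(path,*,trust_remote_code=False): return path
# After (spaces after commas, body moved to its own line):
def load(path, *, trust_remote_code=False):
    return path
```

In both cases the program's behavior is unchanged, which is why the review found no functional modifications.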

@DarkLight1337 enabled auto-merge (squash) September 2, 2025 04:28
@huachenheli changed the title from "Run lint on a few files." to "Run ruff format on a few files." Sep 2, 2025
@github-actions bot added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) Sep 2, 2025
Signed-off-by: Chenheli Hua <[email protected]>
auto-merge was automatically disabled September 2, 2025 05:12 (head branch was pushed to by a user without write access)

@DarkLight1337 enabled auto-merge (squash) September 2, 2025 15:47
@DarkLight1337 merged commit f399182 into vllm-project:main Sep 2, 2025 (40 checks passed)
rzabarazesh pushed a commit to rzabarazesh/vllm that referenced this pull request Sep 2, 2025
845473182 pushed a commit to 845473182/vllm that referenced this pull request Sep 3, 2025
akaihaoshuai pushed a commit to akaihaoshuai/vllm that referenced this pull request Sep 3, 2025
MatthewBonanni pushed a commit to MatthewBonanni/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Matthew Bonanni <[email protected]>
MatthewBonanni pushed a commit to MatthewBonanni/vllm that referenced this pull request Sep 3, 2025
842974287 pushed a commit to 842974287/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Shiyan Deng <[email protected]>
eicherseiji pushed a commit to eicherseiji/vllm that referenced this pull request Sep 9, 2025
LopezCastroRoberto pushed a commit to LopezCastroRoberto/vllm that referenced this pull request Sep 11, 2025
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: LopezCastroRoberto <[email protected]>
cboss6 pushed a commit to cboss6/vllm that referenced this pull request Sep 16, 2025
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: bruceszchen <[email protected]>
Labels: frontend, ready
2 participants