Conversation

didier-durand
Contributor

@didier-durand didier-durand commented Sep 2, 2025

Purpose

Improve the quality of the Python documentation by eliminating typos: see commit diffs for details.

Test Plan

N/A

Test Result

N/A

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request focuses on improving documentation quality by fixing various typos and grammatical errors in comments and docstrings across the codebase. The changes are correct and enhance the readability and maintainability of the code. I have no further suggestions, and the pull request is ready to be merged.

@mergify mergify bot added multi-modality Related to multi-modality (#4194) structured-output speculative-decoding labels Sep 2, 2025
@mergify mergify bot added v1 tpu Related to Google TPUs labels Sep 2, 2025
@russellb russellb enabled auto-merge (squash) September 2, 2025 13:57
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 2, 2025
Collaborator

@yewentao256 yewentao256 left a comment

LGTM, thanks for the work!

@didier-durand
Contributor Author

@yewentao256 : my pleasure!

@russellb russellb merged commit d7e1e59 into vllm-project:main Sep 3, 2025
60 checks passed
mayuyuace pushed a commit to mayuyuace/vllm that referenced this pull request Sep 3, 2025
845473182 pushed a commit to 845473182/vllm that referenced this pull request Sep 3, 2025
* 'main' of https://github.com/845473182/vllm: (457 commits)
  [BugFix] Fix routed_scaling_factor double mul for dots1 and glm4 MoE models (vllm-project#24132)
  [Misc] Add check for dual_chunk_attention (vllm-project#24070)
  [Doc]: fix typos in Python comments (vllm-project#24115)
  [Doc]: fix typos in Python comments (vllm-project#24093)
  [Compile] Fix Compile Warning for `w4a8_mm_entry.cu` (vllm-project#23660)
  fix some typos (vllm-project#24071)
  [V1] Wrapper which plumbs request-level logits processors into vLLM batch-level logits processing (vllm-project#23656)
  Upgrade xgrammar to 0.1.23 (vllm-project#22988)
  Update release pipeline post PyTorch 2.8.0 update (vllm-project#24073)
  [XPU] Fix the bug of LoRA logits on the XPU platform (vllm-project#24081)
  [CI/Build] Disable SiluMul NVFP4 quant fusion tests (vllm-project#24121)
  [Bug] R1 Accuracy: Fix `routed_scaling_factor` Double Mul Issue (vllm-project#24119)
  [AMD][Kernel][Bugfix] Cast offsets tensor bn to tl.int64 to avoid GPU segfault (vllm-project#23692)
  [CI] Enable all hf transformers baselines in test_hybrid (vllm-project#23936)
  [Log] Only Print Profiler Results on Rank 0 (vllm-project#23370)
  Fix weights loading for Apertus (vllm-project#24100)
  [Metrics] Deprecate TPOT in favor of ITL (vllm-project#24110)
  [Bugfix] Fix packed_factor missing attribute error (vllm-project#23902)
  Run ruff format on a few files. (vllm-project#24075)
  [Bugfix] Fix transform_config parsing in Compressed Tensors (vllm-project#23945)
  ...
842974287 pushed a commit to 842974287/vllm that referenced this pull request Sep 3, 2025
eicherseiji pushed a commit to eicherseiji/vllm that referenced this pull request Sep 9, 2025
LopezCastroRoberto pushed a commit to LopezCastroRoberto/vllm that referenced this pull request Sep 11, 2025
Signed-off-by: Didier Durand <[email protected]>
Signed-off-by: LopezCastroRoberto <[email protected]>