
Conversation

@DarkLight1337 (Member) commented on Sep 11, 2025

Purpose

#21088 incorrectly imports the deprecated CacheConfig from the transformers library instead of from vLLM, which breaks the Whisper model (and any model that uses vLLM's Whisper implementation as its encoder) when running with the latest transformers version.

FIX https://buildkite.com/vllm/ci/builds/30279/steps/canvas?sid=019936ef-4094-4ebf-85c3-047098b7a6ee
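
For context, a minimal sketch of the change in vllm/attention/layers/cross_attention.py. The exact form of the offending line introduced by #21088 is assumed here; the point is simply that CacheConfig must come from vllm.config rather than from transformers:

# Broken variant (assumed form): recent transformers releases deprecate and
# remove CacheConfig, so importing it fails at module load time and breaks
# every model that instantiates the cross-attention layer.
# from transformers import CacheConfig

# Fixed variant: use vLLM's own config classes, which is what the
# cross-attention layer actually expects.
from vllm.config import CacheConfig, VllmConfig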

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@DarkLight1337 added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 11, 2025
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request correctly fixes an incorrect import of CacheConfig in vllm/attention/layers/cross_attention.py. The change replaces the import from the transformers library with the correct one from vllm.config. The implementation is clean and directly addresses the bug described. I have no further comments.

@DarkLight1337 added this to the v0.10.2 milestone on Sep 11, 2025
@heheda12345 enabled auto-merge (squash) on September 11, 2025 05:46
@heheda12345 (Collaborator) commented on Sep 11, 2025

Sorry, the same problem happened again: #23459.
Is there a way to avoid that?

@DarkLight1337 (Member, Author) commented

Once we upgrade vLLM's transformers version, mypy should be able to detect the missing import.
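
To illustrate that point (the error text below is hypothetical, and it assumes the pinned transformers version no longer exports the symbol), a stale import like the one fixed here would then be caught statically:

# Once vLLM's pinned transformers version drops CacheConfig from its public
# namespace, mypy would flag this line with something along the lines of
#   error: Module "transformers" has no attribute "CacheConfig"  [attr-defined]
# so a reintroduced bad import fails CI instead of breaking Whisper at runtime.
from transformers import CacheConfig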

  from vllm.attention.layer import Attention
  from vllm.attention.selector import get_attn_backend
- from vllm.config import VllmConfig
+ from vllm.config import CacheConfig, VllmConfig
Member


Small nit, but canonical imports are preferred.

Suggested change
- from vllm.config import CacheConfig, VllmConfig
+ from vllm.config import VllmConfig
+ from vllm.config.cache import CacheConfig

Member Author

The reason will be displayed to describe this comment to others. Learn more.

Let's address this separately.

@vllm-bot merged commit 6aeb1da into vllm-project:main on Sep 11, 2025
43 of 45 checks passed
@russellb (Member) commented

sorry, and thank you!

skyloevil pushed a commit to skyloevil/vllm that referenced this pull request Sep 13, 2025
dsxsteven pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Sep 15, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025