
Conversation

@anmarques (Contributor) commented Sep 2, 2025

[Model] This PR updates the quant_config of a Voxtral model (if present) to remap mistralai module names to match the vLLM model definition.

This change fixes support for models quantized in the compressed-tensors format when loaded with load_format mistral.
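Conceptually, the remapping can be thought of as a table of regex rewrites applied to each module name in the quant config. The sketch below is a hypothetical illustration only; the pattern pairs are invented for this example and are not the PR's actual mappings:

```python
import re

# Hypothetical mistralai-name -> vLLM-name rewrite rules (illustrative only;
# the real patterns live in the Voxtral model definition in this PR).
MISTRAL_TO_VLLM_PATTERNS = [
    (r"^output\.", "language_model.lm_head."),
    (r"^layers\.(\d+)\.attention\.wq\.",
     r"language_model.model.layers.\1.self_attn.q_proj."),
]

def remap_quant_name(name: str) -> str:
    """Map a single mistralai-style module name to its vLLM equivalent."""
    for pattern, replacement in MISTRAL_TO_VLLM_PATTERNS:
        new_name, n_subs = re.subn(pattern, replacement, name)
        if n_subs:
            # Return after the first matching pattern so a name is
            # transformed at most once.
            return new_name
    return name
```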

github-actions bot commented Sep 2, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds support for updating the quantization configuration in the Voxtral model. The changes introduce a new method that remaps module names in the quantization config to match vLLM's internal naming scheme. My review found two critical issues in the remapping logic: a duplicated regex pattern that would produce incorrect mappings, and a faulty condition combined with a missing break in a loop that would prevent quantization target lists from being updated and could apply multiple transformations to a single name. I've provided suggestions to fix these issues.
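To make the review concrete, here is a hypothetical reconstruction of the loop shape being described, with the fix applied; the function name, signature, and structure are assumptions, not the PR's actual code:

```python
import re

def remap_targets(targets: list[str], patterns: list[tuple[str, str]]) -> None:
    """Rewrite each quantization target in place using the first matching pattern."""
    for i, name in enumerate(targets):
        for pattern, replacement in patterns:
            new_name, n_subs = re.subn(pattern, replacement, name)
            if n_subs:
                targets[i] = new_name  # write the rewritten name back to the list
                break  # stop at the first match so a name is rewritten only once
```

Without the break, a name already rewritten by one pattern can match a later pattern and be transformed a second time; and if the write-back is guarded by a condition that never holds, the targets list is never updated at all.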

anmarques and others added 4 commits September 2, 2025 16:35
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Alexandre Marques <[email protected]>
Signed-off-by: Alexandre Marques <[email protected]>
Signed-off-by: Alexandre Marques <[email protected]>
@mergify mergify bot added the llama Related to Llama models label Sep 2, 2025
@robertgshaw2-redhat robertgshaw2-redhat changed the title Add quantization configuration update in Voxtral model [Models][Quantization] Add quantization configuration update in Voxtral model Sep 2, 2025
@mgoin (Member) left a comment


Looks good to me, just a few nits

@mgoin mgoin self-assigned this Sep 10, 2025
@mgoin mgoin enabled auto-merge (squash) September 10, 2025 21:37
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 10, 2025
@simon-mo simon-mo merged commit 5931b7e into vllm-project:main Sep 11, 2025
38 of 40 checks passed