
Conversation

@lkm2835 (Contributor) commented on Aug 29, 2025

Purpose

c498483#diff-167e1581ca70123b5871f46e08770bc343bfbcb897aab95603a1cd9adf9b2a35L162-R167
In this commit, the code was refactored to be cleaner and more readable, but a bug was introduced because the value of apply_all_layers changed.

For EXAONE-4.0-1.2B, sliding window attention is not used in any layer, so rotary embeddings are applied to every layer (apply_all_layers = True).

In contrast, for EXAONE-4.0-32B, rotary embeddings should be applied only to the layers that use sliding window attention (is_sliding = True). A minimal sketch of this per-layer condition follows.
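
A minimal sketch of the per-layer condition, assuming the "sliding_attention" layer-type value used in the PR diff (illustration only, not the exact vLLM source):

```python
def layer_uses_rope(layer_type: str, layer_types: list[str]) -> bool:
    # EXAONE-4.0-1.2B: no "sliding_attention" entries -> RoPE on every layer.
    # EXAONE-4.0-32B: hybrid layers -> RoPE only on sliding-window layers.
    apply_rope_all_layers = "sliding_attention" not in layer_types
    return apply_rope_all_layers or layer_type == "sliding_attention"
```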

Test Plan

AIME25 benchmark evaluation test.
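
For reference, a hedged sketch of how such a run could generate N=4 samples per problem with vLLM (the model ID, prompt, and sampling settings are assumptions; the PR does not specify the exact benchmark harness):

```python
from vllm import LLM, SamplingParams

# Assumed HF model ID and sampling settings, not taken from the PR.
llm = LLM(model="LGAI-EXAONE/EXAONE-4.0-32B")
params = SamplingParams(n=4, temperature=0.6, max_tokens=8192)

outputs = llm.generate(["<AIME25 problem statement>"], params)
for completion in outputs[0].outputs:
    print(completion.text)
```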

Test Result

| AIME25 | Before Bug Fix (N=4) | After Bug Fix (N=4) | Official (N=32) |
|---|---|---|---|
| EXAONE-4.0-32B | 66.7 | 85.0 | 85.3 |

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist (bot) left a comment

Code Review

This pull request addresses a bug in the application of rotary embeddings for EXAONE4 models, particularly distinguishing between model variants with and without sliding window attention. The logic is corrected to apply RoPE to all layers only when no sliding window attention is used, and conditionally on sliding window layers otherwise. This change is supported by improved benchmark results. My review focuses on a minor but important maintainability issue: a comment that has become outdated and misleading due to the logic change.


```diff
  # apply rotary embeddings to every layer
- self.apply_all_layers = not is_sliding
+ self.apply_rope_all_layers = "sliding_attention" not in config.layer_types
```

Severity: high

While this logic correctly fixes the bug, the comment on the preceding line (# apply rotary embeddings to every layer) is now misleading. With this change, rotary embeddings are not always applied to every layer. When sliding window attention is present in any layer, RoPE is only applied to those specific sliding window layers. Please update the comment to accurately reflect this new conditional logic to ensure code clarity and prevent future misunderstandings.
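
One possible rewording of the comment, keeping the variable name from this PR (an illustrative suggestion; the exact wording merged may differ):

```python
# Apply RoPE to every layer only when the model has no sliding-window
# attention; in hybrid models, RoPE is applied just to the sliding-window layers.
self.apply_rope_all_layers = "sliding_attention" not in config.layer_types
```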


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

Signed-off-by: lkm2835 <[email protected]>
@lkm2835 (Contributor, Author) commented on Sep 2, 2025

Hi, @DarkLight1337 @hmellor.
Could you review this PR that fixes a critical accuracy drop issue?

@hmellor (Member) left a comment

LGTM! Thanks for the fix

@hmellor hmellor enabled auto-merge (squash) September 2, 2025 12:19
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Sep 2, 2025
@hmellor hmellor merged commit 38ba061 into vllm-project:main Sep 2, 2025
50 checks passed
akaihaoshuai pushed a commit to akaihaoshuai/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: lkm2835 <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Signed-off-by: 子悬 <[email protected]>
eicherseiji pushed a commit to eicherseiji/vllm that referenced this pull request Sep 9, 2025
Signed-off-by: lkm2835 <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
Signed-off-by: lkm2835 <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>