
Conversation

@rahul-tuli rahul-tuli commented Aug 21, 2025

Summary

This PR implements Eagle3 speculative decoding support for Llama4 models, enabling faster inference through single-layer draft model speculation.

Key Features

  • Eagle3Llama4ForCausalLM: Complete implementation with single-layer draft architecture
  • SupportsEagle3 Interface: Integration with the existing Llama4ForCausalLM class (a sketch of the interface hookup follows this list)
  • Model Registry: Proper mappings for Eagle3 Llama4 model resolution
  • Auxiliary Hidden States: Combines hidden states from multiple target-model layers to give the drafter richer context
  • Vocabulary Mapping: Draft-to-target token conversion for multi-vocabulary support
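
For orientation, here is a rough sketch of how the SupportsEagle3 hookup is shaped. This is illustrative only: the class and method names mirror the pattern described above, and the default layer indices are placeholders, not the exact vLLM signatures or values.

# Illustrative sketch only: the real interface lives in vLLM's model
# interfaces module; method names here follow the Eagle3 pattern but are
# assumptions, not the exact signatures.
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsEagle3(Protocol):
    """Target models implementing this can expose selected layer outputs
    (auxiliary hidden states) to an Eagle3 draft model."""

    def set_aux_hidden_state_layers(self, layers: tuple[int, ...]) -> None: ...

    def get_eagle3_aux_hidden_state_layers(self) -> tuple[int, ...]: ...


class Llama4TargetSketch:
    """Minimal stand-in for the target model showing the wiring."""

    def __init__(self, num_layers: int = 48) -> None:
        # Placeholder default: one early, one middle, one late layer.
        self._aux_layers = (2, num_layers // 2, num_layers - 3)

    def set_aux_hidden_state_layers(self, layers: tuple[int, ...]) -> None:
        self._aux_layers = layers

    def get_eagle3_aux_hidden_state_layers(self) -> tuple[int, ...]:
        return self._aux_layers


target = Llama4TargetSketch()
assert isinstance(target, SupportsEagle3)  # structural check via the Protocol
print(target.get_eagle3_aux_hidden_state_layers())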

Architecture

The implementation follows the established Eagle3 pattern from llama_eagle3.py, with Llama4-specific enhancements (a sketch of the draft-side mechanics follows this list):

  1. Single Decoder Layer: Uses one Llama4 decoder layer for draft token generation
  2. Hidden State Combination: Combines auxiliary states from target model layers (early, middle, late)
  3. Vocabulary Independence: Supports separate draft and target vocabularies
  4. Distributed Inference: Compatible with vLLM's tensor parallelism (for the verifier)
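
The draft-side mechanics (items 2 and 3) can be pictured with the sketch below. It is a minimal illustration, not the actual vLLM code: the attribute names, layer count, and vocabulary sizes are assumptions made for the example.

# Illustrative sketch of the two Llama4-specific pieces: combining auxiliary
# hidden states from early/middle/late target layers, and mapping
# draft-vocabulary token ids back to the target vocabulary.
import torch
import torch.nn as nn


class Eagle3DraftSketch(nn.Module):
    def __init__(self, hidden_size: int, num_aux_layers: int = 3,
                 draft_vocab: int = 32000, target_vocab: int = 202048) -> None:
        super().__init__()
        # Project the concatenated auxiliary states back to one hidden state
        # before feeding the single draft decoder layer.
        self.fc = nn.Linear(hidden_size * num_aux_layers, hidden_size, bias=False)
        # For each draft token id, the matching target-vocabulary id; a trained
        # drafter would ship this mapping as part of its weights.
        self.register_buffer(
            "draft_id_to_target_id",
            torch.zeros(draft_vocab, dtype=torch.long),
        )
        self.target_vocab = target_vocab

    def combine_aux_hidden_states(self, aux: list[torch.Tensor]) -> torch.Tensor:
        # aux: one [num_tokens, hidden_size] tensor per selected target layer.
        return self.fc(torch.cat(aux, dim=-1))

    def map_draft_tokens(self, draft_token_ids: torch.Tensor) -> torch.Tensor:
        # Convert draft-vocab ids to target-vocab ids so the verifier can
        # score them against its own vocabulary.
        return self.draft_id_to_target_id[draft_token_ids]


# Usage sketch with random tensors:
draft = Eagle3DraftSketch(hidden_size=64)
aux_states = [torch.randn(5, 64) for _ in range(3)]
combined = draft.combine_aux_hidden_states(aux_states)  # shape [5, 64]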

Usage

The example below uses a dummy draft model that has not been trained yet; it will be replaced with a real drafter once training is complete.

# Example serving command for Eagle3 Llama4 speculation
VLLM_ENABLE_V1_MULTIPROCESSING=0 CUDA_VISIBLE_DEVICES="0" vllm serve \
    RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16 \
    --tensor-parallel-size 1 \
    --gpu-memory-utilization 0.95 \
    --max-model-len 8192 \
    --speculative-config '{"method": "eagle3", "model": "nm-testing/llama4-scout-17b-eagle3-dummy-drafter", "num_speculative_tokens": 4, "draft_tensor_parallel_size": 1}' \
    --trust-remote-code \
    2>&1 | tee "$LOG_FILE"
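
Once the server is up, it exposes vLLM's OpenAI-compatible API, so speculation is transparent to clients. A minimal client sketch follows; the port, the "EMPTY" api_key placeholder, and the prompt are assumptions matching typical vllm serve defaults, not part of this PR.

# Hypothetical client call against the served model; any OpenAI-compatible
# client works because speculative decoding happens entirely server-side.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RedHatAI/Llama-4-Scout-17B-16E-Instruct-quantized.w4a16",
    messages=[{"role": "user",
               "content": "Summarize speculative decoding in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)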

Testing

The implementation has been tested with:

  • Model loading and initialization
  • Speculative decoding configuration
  • GPU memory optimization
  • Vocabulary mapping functionality (a standalone sketch of such a check follows this list)
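
As a shape for the kind of vocabulary-mapping check mentioned above, a small self-contained example is shown below; the mapping values are made-up placeholders, not the real drafter's vocabulary table or the project's test suite.

# Hypothetical unit-test sketch for draft-to-target token id conversion.
import torch


def test_draft_to_target_token_mapping():
    draft_vocab = 8
    # Pretend mapping: draft id i corresponds to target id 2 * i.
    d2t = torch.arange(draft_vocab, dtype=torch.long) * 2
    draft_tokens = torch.tensor([1, 3, 5])
    target_tokens = d2t[draft_tokens]
    assert target_tokens.tolist() == [2, 6, 10]


if __name__ == "__main__":
    test_draft_to_target_token_mapping()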


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@rahul-tuli rahul-tuli force-pushed the llama4-eagle3-drafter branch from 32f9392 to 184da35 on August 26, 2025 at 12:07

@dsikka dsikka left a comment

LGTM.
Would be good to add a speculators test model

- Add Eagle3Llama4ForCausalLM model implementation
- Add SupportsEagle3 interface to Llama4ForConditionalGeneration
- Update eagle.py to support both Llama and Llama4 Eagle3 models
- Register Eagle3Llama4ForCausalLM in model registry

Signed-off-by: Rahul Tuli <[email protected]>
@rahul-tuli rahul-tuli force-pushed the llama4-eagle3-drafter branch from 184da35 to 7189cfe on August 29, 2025 at 16:55