Conversation

@Wovchena Wovchena commented Jun 9, 2025

This PR ensures the system_message parameter is correctly forwarded through the continuous-batching pipeline and validates this in Python tests.

  • Fixes the C++ adapter and pipeline to pass system_message to the underlying implementation (see the sketch after this list)
  • Cleans up stray semicolons in C++ method definitions
  • Updates Python tests to include a system_message case
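
For context, the forwarding fix amounts to passing the parameter straight through the adapter's override. A minimal sketch, assuming the adapter wraps the underlying ContinuousBatchingPipeline in a member named m_impl (the member name is illustrative; it does not appear on this page):

    // Continuous-batching adapter: hand the system message to the wrapped
    // pipeline instead of discarding it, so the chat template can use it.
    void start_chat(const std::string& system_message) override {
        m_impl.start_chat(system_message);
    }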

@Wovchena Wovchena requested review from yatarkan and Copilot June 9, 2025 11:22
@github-actions github-actions bot added the category: continuous batching and category: LLM labels Jun 9, 2025

@Copilot Copilot AI left a comment

Pull Request Overview

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

File and description:
  tests/python_tests/test_llm_pipeline.py: Added a system_message case and a greeting question to the test list
  src/cpp/src/llm/pipeline_continuous_batching_adapter.hpp: Forwards system_message in the start_chat override
  src/cpp/src/continuous_batching/pipeline.cpp: Removed redundant semicolons after function definitions
Comments suppressed due to low confidence (2)

tests/python_tests/test_llm_pipeline.py:146

  • The test now includes a custom system_message but doesn't assert that the output reflects the 'Hola!' prefix. Add an assertion to verify that generated responses start with 'Hola!'.
(dict(max_new_tokens=20), "You are a helpful assistant. Start every message with 'Hola!'")

src/cpp/src/llm/pipeline_continuous_batching_adapter.hpp:207

  • [nitpick] The API signature for start_chat has changed to accept a system_message. Consider updating or adding a doc comment above this method to explain the new parameter and its expected behavior.
void start_chat(const std::string& system_message) override {
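
One possible shape for such a doc comment (the wording below is illustrative rather than taken from the repository, and m_impl again stands in for the wrapped pipeline member):

    /// Starts a chat session. If system_message is non-empty, it is added as
    /// the first entry of the conversation history and rendered through the
    /// model's chat template on subsequent generate() calls.
    void start_chat(const std::string& system_message) override {
        m_impl.start_chat(system_message);
    }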

@Wovchena Wovchena added this pull request to the merge queue Jun 10, 2025
Merged via the queue into openvinotoolkit:master with commit a77bf3b Jun 10, 2025
68 checks passed