
Conversation

@heheda12345 heheda12345 commented Aug 7, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Support streaming in the Responses API. Note that only a subset of the API surface is supported.
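
For context, a minimal usage sketch of what Responses API streaming looks like from a client, assuming a vLLM server is already running at http://localhost:8000/v1 and using the openai Python client; the model name is a placeholder and the event type names follow the OpenAI SDK, so they may not cover every code path added here:

# Hedged sketch: consume Responses API streaming events from a running
# vLLM server. base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.responses.create(
    model="my-model",          # placeholder model name
    input="Write a haiku about streaming.",
    stream=True,
)
for event in stream:
    # Each event carries a `type`; print text deltas as they arrive.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        print()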

Test Plan

Run all tests in tests/v1/entrypoints/openai/responses to check that existing models are not broken. Harmony integration is not tested yet.

Test Result

Passed

(Optional) Documentation Update

Originally authored by @simon-mo
Should be merged after #22427

@heheda12345 heheda12345 requested a review from aarnphm as a code owner August 7, 2025 06:10

github-actions bot commented Aug 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


mergify bot commented Aug 7, 2025

⚠️ The SHA of the head commit of this PR conflicts with #22427. Mergify cannot evaluate rules on this PR. ⚠️

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for streaming in the response API, primarily for the Harmony protocol (gpt-oss models). This involves significant refactoring in serving_responses.py to handle different conversation contexts and response generation logic for Harmony and non-Harmony models. It also adds MCPToolServer for integration with external tool servers.

My review found a few critical and high-severity issues. In harmony_utils.py, there's an unsafe dictionary key access that could lead to a KeyError, and an incorrect instantiation of ResponseReasoningItem that will cause a TypeError. In tool_server.py, the new MCPToolServer has two issues: it doesn't correctly implement the new_session abstract method from its base class, which is a critical bug, and its URL construction for tool servers is not robust.

Comment on lines 143 to 154

critical

The method get_tool_session should be named new_session to correctly implement the abstract method from the ToolServer base class. The current implementation will result in a TypeError at runtime because MCPToolServer does not implement the abstract method new_session.

Suggested change

# Current:
@asynccontextmanager
async def get_tool_session(self, tool_name: str):
    from mcp import ClientSession
    from mcp.client.sse import sse_client
    url = self.urls.get(tool_name)
    if url:
        async with sse_client(url=url) as streams, ClientSession(
                *streams) as session:
            await session.initialize()
            yield session
    else:
        logger.warning("Tool %s not found", tool_name)

# Suggested:
@asynccontextmanager
async def new_session(self, tool_name: str):
    from mcp import ClientSession
    from mcp.client.sse import sse_client
    url = self.urls.get(tool_name)
    if url:
        async with sse_client(url=url) as streams, ClientSession(
                *streams) as session:
            await session.initialize()
            yield session
    else:
        logger.warning("Tool %s not found", tool_name)

Comment on lines 195 to 197

high

The direct access browser_call["pattern"] is unsafe and can lead to a KeyError if the 'pattern' key is missing in the browser_call dictionary. This is inconsistent with the usage of .get() for other keys in this function, such as 'query' and 'url'. To prevent potential crashes, consider using .get() and handling the case where the key might be missing, or wrapping this access in a try-except block for better error handling.
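
A hedged sketch of the safer access pattern being suggested; the helper name extract_pattern is hypothetical, and the fallback behavior (log a warning and return None) is an assumption rather than the handling actually merged:

import logging

logger = logging.getLogger(__name__)

def extract_pattern(browser_call: dict):
    # Mirror the .get() style already used for 'query' and 'url' instead of
    # the unsafe browser_call["pattern"], which raises KeyError when absent.
    pattern = browser_call.get("pattern")
    if pattern is None:
        logger.warning("browser_call is missing the 'pattern' key: %s",
                       browser_call)
    return pattern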

Comment on lines 236 to 245

high

The ResponseReasoningItem is being instantiated with a text parameter, but the openai.types.responses.ResponseReasoningItem class does not accept text as a direct argument. It expects a content argument which should be a list of Content objects (like ResponseReasoningTextContent). This will cause a TypeError at runtime.

Suggested change

# Current:
reasoning_item = ResponseReasoningItem(
    id=f"rs_{random_uuid()}",
    summary=[],
    type="reasoning",
    text=content.text,
    status=None,
)

# Suggested:
reasoning_item = ResponseReasoningItem(
    id=f"rs_{random_uuid()}",
    summary=[],
    type="reasoning",
    content=[
        ResponseReasoningTextContent(text=content.text,
                                     type="reasoning_text")
    ],
    status=None,
)

Contributor

high

The URL construction url = f"http://{url}/sse" is not robust. It unconditionally prepends http://, which will lead to incorrect URLs if the user provides a full URL with a scheme (e.g., http://... or https://...) in the --tool-server argument. You should check if the URL already has a scheme before prepending http://.
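
A hedged sketch of one way to make the construction scheme-aware; the helper name and the trailing-slash handling are assumptions, not the code actually merged:

def normalize_tool_server_url(url: str) -> str:
    # Prepend a scheme only when the user-supplied value lacks one, so both
    # "localhost:8000" and "https://tools.example.com" produce valid URLs.
    if not url.startswith(("http://", "https://")):
        url = f"http://{url}"
    return f"{url.rstrip('/')}/sse"

# e.g. normalize_tool_server_url("localhost:8000")
#        -> "http://localhost:8000/sse"
#      normalize_tool_server_url("https://tools.example.com/")
#        -> "https://tools.example.com/sse"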


mergify bot commented Aug 7, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @heheda12345.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
@mergify mergify bot added the gpt-oss (Related to GPT-OSS models) label Aug 11, 2025
Comment on lines +1169 to +1170
if False:
    yield
Collaborator

Is if False needed?

Collaborator

Yes. This tricks Python into treating the function as a generator, but in fact it returns immediately.
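
A self-contained sketch of the trick being described; the function name is illustrative, and whether the real code path is an async generator (as sketched here) or a plain generator is an assumption:

import asyncio
from typing import AsyncGenerator

async def _empty_stream() -> AsyncGenerator[str, None]:
    # The unreachable `yield` makes Python compile this function as an async
    # generator, so callers can `async for` over it; since nothing is ever
    # yielded, iteration ends immediately.
    if False:
        yield

async def main():
    async for _ in _empty_stream():
        print("never reached")
    print("stream was empty, as expected")

asyncio.run(main())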

Comment on lines 988 to 989
# TODO: migrate this to
# ResponseReasoningTextContent for now
Collaborator

Is this TODO already done?

Collaborator Author

I think so. CC @simon-mo

Collaborator

Correct. You can remove the TODO.

logprobs=[],
),
))
# TODO: migrate to OpenAI types once updated.
Collaborator

ditto

Collaborator Author

This is done. Removed the comments.

@WoosukKwon WoosukKwon left a comment

LGTM overall. Can we explicitly specify which APIs are supported and which are not yet supported?

Signed-off-by: Chen Zhang <[email protected]>
@heheda12345 (Collaborator Author)

The supported APIs are listed in #22554. All other APIs are unverified.

Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
@heheda12345 (Collaborator Author)

@WoosukKwon addressed your comments.

@simon-mo simon-mo enabled auto-merge (squash) August 12, 2025 00:40
@github-actions github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label Aug 12, 2025
@simon-mo simon-mo disabled auto-merge August 12, 2025 00:46
@simon-mo simon-mo merged commit 95a935f into vllm-project:main Aug 12, 2025
15 of 51 checks passed
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
Labels: frontend, gpt-oss (Related to GPT-OSS models), ready (ONLY add when PR is ready to merge/full CI is needed), v1