[gpt-oss] Support streaming in response API #22431
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 …
Code Review
This pull request introduces support for streaming in the response API, primarily for the Harmony protocol (gpt-oss models). This involves significant refactoring in serving_responses.py to handle different conversation contexts and response-generation logic for Harmony and non-Harmony models. It also adds MCPToolServer for integration with external tool servers.

My review found a few critical and high-severity issues. In harmony_utils.py, there is an unsafe dictionary key access that could lead to a KeyError, and an incorrect instantiation of ResponseReasoningItem that will cause a TypeError. In tool_server.py, the new MCPToolServer has two issues: it does not correctly implement the new_session abstract method from its base class, which is a critical bug, and its URL construction for tool servers is not robust.
vllm/entrypoints/tool_server.py
Outdated
The method get_tool_session should be named new_session to correctly implement the abstract method from the ToolServer base class. The current implementation will result in a TypeError at runtime because MCPToolServer does not implement the abstract method new_session.
```python
@asynccontextmanager
async def get_tool_session(self, tool_name: str):
    from mcp import ClientSession
    from mcp.client.sse import sse_client
    url = self.urls.get(tool_name)
    if url:
        async with sse_client(url=url) as streams, ClientSession(
                *streams) as session:
            await session.initialize()
            yield session
    else:
        logger.warning("Tool %s not found", tool_name)
```

Suggested change:

```python
@asynccontextmanager
async def new_session(self, tool_name: str):
    from mcp import ClientSession
    from mcp.client.sse import sse_client
    url = self.urls.get(tool_name)
    if url:
        async with sse_client(url=url) as streams, ClientSession(
                *streams) as session:
            await session.initialize()
            yield session
    else:
        logger.warning("Tool %s not found", tool_name)
```
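As background, Python's abc machinery is what makes this a hard failure: a subclass that defines the method under the wrong name leaves the abstract method unimplemented, and instantiation raises TypeError. A minimal sketch, where ToolServerBase and BadServer are hypothetical stand-ins for vLLM's actual classes:

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for vLLM's ToolServer base class.
class ToolServerBase(ABC):
    @abstractmethod
    def new_session(self, tool_name: str):
        ...

class BadServer(ToolServerBase):
    # Wrong name: new_session is still abstract,
    # so BadServer cannot be instantiated.
    def get_tool_session(self, tool_name: str):
        return None

try:
    BadServer()
except TypeError as e:
    print(f"TypeError: {e}")
```

Renaming the method to new_session makes the class concrete and instantiable again.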
vllm/entrypoints/harmony_utils.py
Outdated
The direct access browser_call["pattern"] is unsafe and can lead to a KeyError if the 'pattern' key is missing from the browser_call dictionary. This is inconsistent with the use of .get() for other keys in this function, such as 'query' and 'url'. To prevent potential crashes, consider using .get() and handling the case where the key might be missing, or wrapping this access in a try-except block for better error handling.
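To illustrate the difference, here is a small sketch; browser_call below is a hypothetical payload, not the actual Harmony data:

```python
# Hypothetical browser_call payload that is missing the "pattern" key.
browser_call = {"query": "vllm streaming"}

# Direct indexing raises KeyError when the key is absent.
try:
    browser_call["pattern"]
except KeyError:
    print("KeyError on direct access")

# .get() returns None (or a supplied default) instead, matching how
# "query" and "url" are read elsewhere in the function.
pattern = browser_call.get("pattern")
print(pattern)  # None
```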
vllm/entrypoints/harmony_utils.py
Outdated
The ResponseReasoningItem is being instantiated with a text parameter, but the openai.types.responses.ResponseReasoningItem class does not accept text as a direct argument. It expects a content argument, which should be a list of Content objects (like ResponseReasoningTextContent). This will cause a TypeError at runtime.
```python
reasoning_item = ResponseReasoningItem(
    id=f"rs_{random_uuid()}",
    summary=[],
    type="reasoning",
    text=content.text,
    status=None,
)
```

Suggested change:

```python
reasoning_item = ResponseReasoningItem(
    id=f"rs_{random_uuid()}",
    summary=[],
    type="reasoning",
    content=[
        ResponseReasoningTextContent(text=content.text,
                                     type="reasoning_text")
    ],
    status=None,
)
```
vllm/entrypoints/tool_server.py
Outdated
The URL construction url = f"http://{url}/sse" is not robust. It unconditionally prepends http://, which will lead to incorrect URLs if the user provides a full URL with a scheme (e.g., http://... or https://...) in the --tool-server argument. You should check whether the URL already has a scheme before prepending http://.
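One possible fix, sketched here as a hypothetical helper rather than the actual vLLM code:

```python
def build_sse_url(raw: str) -> str:
    # Prepend http:// only when the user-supplied address has no scheme.
    if not raw.startswith(("http://", "https://")):
        raw = f"http://{raw}"
    # Strip any trailing slash so the /sse suffix is appended cleanly.
    return f"{raw.rstrip('/')}/sse"

print(build_sse_url("localhost:8000"))          # http://localhost:8000/sse
print(build_sse_url("https://tools.example/"))  # https://tools.example/sse
```

A simple prefix check is used here because urllib.parse.urlparse can misread host:port strings like "localhost:8000" as having a scheme.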
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Chen Zhang <[email protected]>
```python
if False:
    yield
```
Is if False needed?
Yes. This tricks Python into treating the function as a generator, but in fact it returns immediately without yielding anything.
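The trick works because any def containing a yield, even an unreachable one, is compiled as a generator function; calling it returns a generator that finishes immediately. A minimal sketch of the same idea with a plain (non-async) function:

```python
import types

def empty_gen():
    # The unreachable yield makes this a generator function.
    if False:
        yield

g = empty_gen()
print(isinstance(g, types.GeneratorType))  # True
print(list(g))                             # []
```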
```python
# TODO: migrate this to
# ResponseReasoningTextContent for now
```
Is this TODO already done?
I think so. CC @simon-mo
Correct. You can remove the TODO.
```python
        logprobs=[],
    ),
))
# TODO: migrate to OpenAI types once updated.
```
ditto
This is done. Removed the comments.
LGTM overall. Can we explicitly specify which APIs are supported and which are not yet?
The supported APIs are listed in #22554. All other APIs are unverified.
@WoosukKwon addressed your comments.
Signed-off-by: Chen Zhang <[email protected]> Signed-off-by: Paul Pak <[email protected]>
Signed-off-by: Chen Zhang <[email protected]> Signed-off-by: Diego-Castan <[email protected]>
Signed-off-by: Chen Zhang <[email protected]> Signed-off-by: Xiao Yu <[email protected]>
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.

Purpose
Support streaming in response API. Note that only part of the APIs are supported.
Test Plan

All tests in tests/v1/entrypoints/openai/responses, to check whether it breaks existing models. Harmony integration is not tested yet.

Test Result

Passed
(Optional) Documentation Update
Originally authored by @simon-mo
Should be merged after #22427