 @component
 class OpenAIGenerator:
     """
-    Text generation component using OpenAI's large language models (LLMs).
+    Generates text using OpenAI's large language models (LLMs).
 
-    Enables text generation using OpenAI's large language models (LLMs). It supports gpt-4 and gpt-3.5-turbo
-    family of models.
+    It works with the gpt-4 and gpt-3.5-turbo models and supports streaming responses
+    from OpenAI API. It uses strings as input and output.
 
-    Users can pass any text generation parameters valid for the `openai.ChatCompletion.create` method
-    directly to this component via the `**generation_kwargs` parameter in __init__ or the `**generation_kwargs`
-    parameter in `run` method.
+    You can customize how the text is generated by passing parameters to the
+    OpenAI API. Use the `**generation_kwargs` argument when you initialize
+    the component or when you run it. Any parameter that works with
+    `openai.ChatCompletion.create` will work here too.
 
-    For more details on the parameters supported by the OpenAI API, refer to the OpenAI
-    [documentation](https://platform.openai.com/docs/api-reference/chat).
 
-    Key Features and Compatibility:
-    - Primary Compatibility: Designed to work seamlessly with gpt-4, gpt-3.5-turbo family of models.
-    - Streaming Support: Supports streaming responses from the OpenAI API.
-    - Customizability: Supports all parameters supported by the OpenAI API.
+    For details on OpenAI API parameters, see
+    [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat).
 
-    Input and Output Format:
-    - String Format: This component uses the strings for both input and output.
+    ### Usage example
 
     ```python
     from haystack.components.generators import OpenAIGenerator
@@ -65,12 +61,12 @@ def __init__(
         max_retries: Optional[int] = None,
     ):
         """
-        Creates an instance of OpenAIGenerator. Unless specified otherwise in the `model`, OpenAI's GPT-3.5 is used.
+        Creates an instance of OpenAIGenerator. Unless specified otherwise in `model`, uses OpenAI's GPT-3.5.
 
         By setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' you can change the timeout and max_retries parameters
         in the OpenAI client.
 
-        :param api_key: The OpenAI API key.
+        :param api_key: The OpenAI API key to connect to OpenAI.
         :param model: The name of the model to use.
        :param streaming_callback: A callback function that is called when a new token is received from the stream.
             The callback function accepts StreamingChunk as an argument.
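To make the `generation_kwargs` pass-through described in the new docstring concrete, here is a minimal sketch. It assumes the component's `run()` method accepts a `prompt` string plus per-call `generation_kwargs` and returns a dict with a `replies` list; the `max_tokens` and `temperature` values are illustrative, not taken from the diff above.

```python
from haystack.components.generators import OpenAIGenerator

# Assumes OPENAI_API_KEY is set in the environment.
# Any parameter accepted by `openai.ChatCompletion.create` can be forwarded,
# either once at construction time or per call.
generator = OpenAIGenerator(
    model="gpt-3.5-turbo",
    generation_kwargs={"max_tokens": 128, "temperature": 0.2},
)

# Per-call kwargs take effect for this invocation (assumed to override the init-time values).
result = generator.run(
    prompt="Explain what a Haystack component is in one sentence.",
    generation_kwargs={"temperature": 0.7},
)
print(result["replies"][0])  # assumed output shape: {"replies": [...], "meta": [...]}
```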
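A similarly hedged sketch of the streaming and client-configuration options mentioned in `__init__`: the `haystack.dataclasses.StreamingChunk` import path, its `content` attribute, and the assumption that the environment variables are read when the generator is constructed are not spelled out in the diff above.

```python
import os

from haystack.components.generators import OpenAIGenerator
from haystack.dataclasses import StreamingChunk  # assumed import path

# Set before constructing the generator so the OpenAI client picks them up.
os.environ["OPENAI_TIMEOUT"] = "30"      # request timeout, in seconds (assumed unit)
os.environ["OPENAI_MAX_RETRIES"] = "2"   # max_retries for the OpenAI client


def print_chunk(chunk: StreamingChunk) -> None:
    # Each streamed token arrives as a StreamingChunk; print it as it comes in.
    print(chunk.content, end="", flush=True)


generator = OpenAIGenerator(model="gpt-3.5-turbo", streaming_callback=print_chunk)
generator.run(prompt="Write one sentence about streaming responses.")
```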