
Commit a75dc89

update docstrings (#8158)
1 parent 0cda96e commit a75dc89

File tree

1 file changed: +20 additions, -22 deletions


haystack/components/generators/hugging_face_local.py

Lines changed: 20 additions & 22 deletions
@@ -33,11 +33,12 @@
 @component
 class HuggingFaceLocalGenerator:
     """
-    Generator based on a Hugging Face model.
+    Generates text using models from Hugging Face that run locally.
 
-    This component provides an interface to generate text using a Hugging Face model that runs locally.
+    LLMs running locally may need powerful hardware.
+
+    ### Usage example
 
-    Usage example:
     ```python
     from haystack.components.generators import HuggingFaceLocalGenerator
 
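The usage example in the updated class docstring is cut off at the hunk boundary above. As a rough illustration only (the model name, task, and the `warm_up()`/`run()` calls are assumptions about the surrounding Haystack 2.x generator API, not part of this diff), it could continue along these lines:

```python
from haystack.components.generators import HuggingFaceLocalGenerator

# Illustrative values; not taken from the diff itself.
generator = HuggingFaceLocalGenerator(
    model="google/flan-t5-large",
    task="text2text-generation",
    generation_kwargs={"max_new_tokens": 100},
)

generator.warm_up()  # load the local Hugging Face pipeline
result = generator.run("Who is the best American actor?")
print(result["replies"])  # a list of generated strings
```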
@@ -67,35 +68,32 @@ def __init__(
         """
         Creates an instance of a HuggingFaceLocalGenerator.
 
-        :param model: The name or path of a Hugging Face model for text generation,
-        :param task: The task for the Hugging Face pipeline.
-            Possible values are "text-generation" and "text2text-generation".
-            Generally, decoder-only models like GPT support "text-generation",
-            while encoder-decoder models like T5 support "text2text-generation".
-            If the task is also specified in the `huggingface_pipeline_kwargs`, this parameter will be ignored.
-            If not specified, the component will attempt to infer the task from the model name,
-            calling the Hugging Face Hub API.
-        :param device: The device on which the model is loaded. If `None`, the default device is automatically
-            selected. If a device/device map is specified in `huggingface_pipeline_kwargs`, it overrides this parameter.
+        :param model: The Hugging Face text generation model name or path.
+        :param task: The task for the Hugging Face pipeline. Possible options:
+            - `text-generation`: Supported by decoder models, like GPT.
+            - `text2text-generation`: Supported by encoder-decoder models, like T5.
+            If the task is specified in `huggingface_pipeline_kwargs`, this parameter is ignored.
+            If not specified, the component calls the Hugging Face API to infer the task from the model name.
+        :param device: The device for loading the model. If `None`, automatically selects the default device.
+            If a device or device map is specified in `huggingface_pipeline_kwargs`, it overrides this parameter.
         :param token: The token to use as HTTP bearer authorization for remote files.
-            If the token is also specified in the `huggingface_pipeline_kwargs`, this parameter will be ignored.
-        :param generation_kwargs: A dictionary containing keyword arguments to customize text generation.
-            Some examples: `max_length`, `max_new_tokens`, `temperature`, `top_k`, `top_p`,...
+            If the token is specified in `huggingface_pipeline_kwargs`, this parameter is ignored.
+        :param generation_kwargs: A dictionary with keyword arguments to customize text generation.
+            Some examples: `max_length`, `max_new_tokens`, `temperature`, `top_k`, `top_p`.
             See Hugging Face's documentation for more information:
             - [customize-text-generation](https://huggingface.co/docs/transformers/main/en/generation_strategies#customize-text-generation)
             - [transformers.GenerationConfig](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig)
-        :param huggingface_pipeline_kwargs: Dictionary containing keyword arguments used to initialize the
+        :param huggingface_pipeline_kwargs: Dictionary with keyword arguments to initialize the
             Hugging Face pipeline for text generation.
             These keyword arguments provide fine-grained control over the Hugging Face pipeline.
             In case of duplication, these kwargs override `model`, `task`, `device`, and `token` init parameters.
-            See Hugging Face's [documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.pipeline.task)
-            for more information on the available kwargs.
+            For available kwargs, see [Hugging Face documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.pipeline.task).
             In this dictionary, you can also include `model_kwargs` to specify the kwargs for model initialization:
             [transformers.PreTrainedModel.from_pretrained](https://huggingface.co/docs/transformers/en/main_classes/model#transformers.PreTrainedModel.from_pretrained)
-        :param stop_words: A list of stop words. If any one of the stop words is generated, the generation is stopped.
-            If you provide this parameter, you should not specify the `stopping_criteria` in `generation_kwargs`.
+        :param stop_words: If the model generates a stop word, the generation stops.
+            If you provide this parameter, don't specify the `stopping_criteria` in `generation_kwargs`.
             For some chat models, the output includes both the new text and the original prompt.
-            In these cases, it's important to make sure your prompt has no stop words.
+            In these cases, make sure your prompt has no stop words.
         :param streaming_callback: An optional callable for handling streaming responses.
         """
         transformers_import.check()
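To make the refreshed parameter descriptions concrete, here is a minimal sketch of how `task`, `generation_kwargs`, `stop_words`, and `streaming_callback` might be combined. The model choice, the prompt, the callback, and the assumption that streamed chunks expose their text as `.content` are illustrative, not taken from this diff:

```python
from haystack.components.generators import HuggingFaceLocalGenerator

# Print streamed text as it is generated. Assumes Haystack passes
# StreamingChunk objects whose generated text is available as `.content`.
def print_chunk(chunk):
    print(chunk.content, end="", flush=True)

generator = HuggingFaceLocalGenerator(
    model="gpt2",                              # small decoder-only model, illustrative only
    task="text-generation",                    # decoder models use "text-generation"
    generation_kwargs={"max_new_tokens": 64},
    stop_words=["Observation:"],               # generation stops once this string appears
    streaming_callback=print_chunk,
)

generator.warm_up()
reply = generator.run("Briefly explain what a tokenizer does.")["replies"][0]
```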
