@@ -125,79 +125,52 @@ def gemini_output_img_handler(response: any):
 class LiteLLM:
     """
     A comprehensive wrapper for LiteLLM that provides a unified interface for interacting
-    with 100+ Large Language Model providers.
-
-    This class supports multiple model providers including OpenAI, Anthropic, Google, Azure,
-    Ollama, Cohere, DeepSeek, Groq, and many others through the LiteLLM library.
-
-    Features:
-    - Text generation with customizable parameters
-    - Vision capabilities (image understanding) - supports file paths, URLs, base64
-    - Audio processing
-    - Tool/function calling
-    - Reasoning model support (o1, o3, Claude with thinking)
-    - Streaming responses
-    - Batch processing
-    - Automatic error handling and retries
-    - Provider-agnostic interface
+    with various Large Language Models (LLMs).
+
+    This class supports multiple model providers including OpenAI, Anthropic, Google,
+    and many others through the LiteLLM library. It provides features such as:
+
+    - Text generation with customizable parameters
+    - Vision capabilities (image understanding)
+    - Audio processing
+    - Tool/function calling
+    - Reasoning model support
+    - Streaming responses
+    - Batch processing
+    - Automatic error handling and retries
 
     The class intelligently handles different model requirements, automatically converting
     images to appropriate formats, managing message history, and providing detailed
     error messages for troubleshooting.
 
     Attributes:
-        model_name (str): The name of the model to use. Supports any LiteLLM provider format:
-            - OpenAI: "gpt-4o", "gpt-4o-mini"
-            - Anthropic: "claude-3-5-sonnet-20241022"
-            - Google: "gemini/gemini-pro"
-            - Azure: "azure/gpt-4"
-            - Ollama: "ollama/llama2"
-            - And 100+ more providers
+        model_name (str): The name of the model to use.
         system_prompt (str): The system prompt for the conversation.
         stream (bool): Whether to stream responses.
-        temperature (float): Sampling temperature for generation (0.0-2.0).
+        temperature (float): Sampling temperature for generation.
         max_tokens (int): Maximum number of tokens to generate.
         messages (list): Conversation message history.
         modalities (list): Supported input modalities.
 
     Example:
         Basic usage:
         ```python
-        from swarms.utils.litellm_wrapper import LiteLLM
-
-        llm = LiteLLM(model_name="gpt-4o", temperature=0.7)
+        llm = LiteLLM(model_name="gpt-4", temperature=0.7)
         response = llm.run("What is the capital of France?")
         ```
 
         With vision:
         ```python
-        llm = LiteLLM(model_name="gpt-4o")
+        llm = LiteLLM(model_name="gpt-4-vision-preview")
         response = llm.run("Describe this image", img="path/to/image.jpg")
         ```
 
         With tools:
         ```python
         tools = [{"type": "function", "function": {...}}]
-        llm = LiteLLM(model_name="gpt-4o", tools_list_dictionary=tools)
+        llm = LiteLLM(model_name="gpt-4", tools_list_dictionary=tools)
         response = llm.run("Use the weather tool to get today's weather")
         ```
-
-        With streaming:
-        ```python
-        llm = LiteLLM(model_name="gpt-4o", stream=True)
-        for chunk in llm.run("Tell me a story"):
-            print(chunk, end="", flush=True)
-        ```
-
-        With reasoning model:
-        ```python
-        llm = LiteLLM(model_name="openai/o1-preview", reasoning_enabled=True)
-        response = llm.run("Solve this complex math problem...")
-        ```
-
-    See Also:
-        - LiteLLM Documentation: https://docs.litellm.ai/
-        - Swarms LiteLLM Guide: docs/swarms/examples/litellm.md
     """
 
     def __init__(
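The simplified docstring still advertises "automatic error handling and retries". As a hedged illustration of that pattern only (this is not the wrapper's actual implementation; `run_with_retries` and `flaky_completion` are hypothetical names invented for this sketch), a minimal exponential-backoff loop looks like:

```python
import time


def run_with_retries(call, max_retries=3, base_delay=0.01):
    """Invoke `call`, retrying with exponential backoff on any exception.

    Re-raises the last exception once `max_retries` attempts are exhausted.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))


# Simulated flaky provider call: fails twice, then succeeds.
attempts = {"n": 0}


def flaky_completion():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient provider error")
    return "Paris"


result = run_with_retries(flaky_completion)
```

Here the third attempt succeeds, so `result` is `"Paris"` after two retried failures. Real provider wrappers typically narrow the `except` clause to transient error types (timeouts, rate limits) rather than retrying every exception.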