OllamaChat
Class for an advanced Ollama chat session with extended configuration.
__init__(model='qwen2.5-coder', system_prompt=None, options=None, proxies=None)

Initialize an advanced Ollama chat session with extended configuration.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | str | The name of the Ollama model. | 'qwen2.5-coder' |
| system_prompt | str | Initial system message to set the chat context. | None |
| options | dict | Advanced model generation parameters. | None |
| proxies | dict | Proxy configuration for the HTTP requests. | None |
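As a sketch of what the configuration arguments might look like: the option keys below (`temperature`, `top_p`, `num_predict`) mirror common Ollama generation parameters, and the proxy dict follows the usual requests-style shape. Treat the key names and proxy URLs as illustrative assumptions, not guarantees from this class.

```python
# Generation options to pass as OllamaChat(options=...); the key names
# follow common Ollama parameters and are illustrative assumptions.
options = {
    "temperature": 0.2,   # lower = more deterministic output
    "top_p": 0.9,         # nucleus sampling cutoff
    "num_predict": 512,   # maximum number of tokens to generate
}

# Proxy configuration for the underlying HTTP requests (requests-style
# mapping of scheme to proxy URL; the host here is a placeholder).
proxies = {
    "http": "http://proxy.local:3128",
    "https": "http://proxy.local:3128",
}
```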
add_message(content, role='user', **kwargs)

Add a message to the chat history with optional metadata.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| content | str | The message content. | required |
| role | str | Message role (user/assistant/system). | 'user' |
| kwargs | dict | Additional message metadata. | {} |
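The chat history presumably follows the standard message shape used by Ollama-style chat APIs: a list of dicts with `role` and `content`, plus any extra metadata. A minimal stand-alone sketch of what `add_message` likely does (the real class may store additional fields):

```python
history = []  # stand-in for the session's internal message list

def add_message(content, role="user", **kwargs):
    """Append a message dict to the history; extra kwargs become metadata."""
    message = {"role": role, "content": content, **kwargs}
    history.append(message)
    return message

add_message("You are a helpful assistant.", role="system")
add_message("Explain list comprehensions.", timestamp="2024-01-01T00:00:00")
```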
generate_response(stream=False, llm_options={})

Generate a response with advanced configuration options.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| stream | bool | Stream the response in real time. | False |
| llm_options | dict | Temporary generation options. | {} |

Returns:

| Type | Description |
|---|---|
| Union[str, Generator] | Response as a string or a streaming generator. |
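Because the return type depends on `stream`, callers generally need to handle both cases. A sketch of a caller-side helper, using a plain generator function as a stand-in for the streaming response:

```python
from typing import Generator, Union

def consume(response: Union[str, Generator]) -> str:
    """Return the response text, joining chunks if it is a generator."""
    if isinstance(response, str):
        return response
    return "".join(response)

def fake_stream():
    # Stand-in for generate_response(stream=True); yields text chunks.
    yield from ["Hello", ", ", "world"]

consume("Hello, world")   # non-streaming case: returned unchanged
consume(fake_stream())    # streaming case: chunks joined into one string
```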
generate_simple_response(prompt, sys_prompt=None, stream=False, llm_options={})

Generate a simple response without using the chat history.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prompt | str | User prompt. | required |
| sys_prompt | str | System prompt. | None |
| stream | bool | Stream the response in real time. | False |
| llm_options | dict | Temporary generation options. | {} |

Returns:

| Type | Description |
|---|---|
| Union[str, Generator] | Response as a string or a streaming generator. |
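Unlike `generate_response`, this call ignores the accumulated history: conceptually it sends a one-off message list built only from the two prompts. A sketch of that likely message structure (an assumption about the payload shape, not the class's actual internals):

```python
def build_oneoff_messages(prompt, sys_prompt=None):
    """Build the throwaway message list a history-free call would send."""
    messages = []
    if sys_prompt is not None:
        messages.append({"role": "system", "content": sys_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

# History-free: nothing is appended to the session's message list.
build_oneoff_messages("Summarize this file.", sys_prompt="Be brief.")
```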
print_Generator_and_return(response, number=1)

Prints the content of a response if it is a generator, or simply returns the response as is.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| response | Generator \| str | The response to print or return. If it's a generator, it will be printed chunk by chunk. If it's a string, it will be returned directly. | required |
| number | int | The index of the response (default is 1). Used for logging purposes. | 1 |

Returns:

| Type | Description |
|---|---|
| str | The original response if it is a string, or the concatenated string of all chunks if it was a generator. |
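A stand-alone sketch of this helper's contract: print streamed chunks as they arrive, and always hand back the full text. The function name is lowercased here for idiomatic Python; the documented method is `print_Generator_and_return`, and the exact log format is an assumption.

```python
from typing import Generator, Union

def print_generator_and_return(response: Union[Generator, str],
                               number: int = 1) -> str:
    """Print streamed chunks as they arrive; always return the full text."""
    if isinstance(response, str):
        return response
    chunks = []
    print(f"Response {number}: ", end="")
    for chunk in response:
        print(chunk, end="", flush=True)  # show each chunk immediately
        chunks.append(chunk)
    print()
    return "".join(chunks)
```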