Creates an Ollama LLM provider with robust error handling, following tidyprompt best practices.
Usage:

llm_ollama(
  base_url = "http://localhost:11434/api/chat",
  model = "qwen2.5:1.5b-instruct",
  temperature = 0.2,
  max_tokens = 5000,
  timeout = 100,
  stream = TRUE,
  verbose = TRUE,
  skip_test = FALSE,
  auto_download = TRUE,
  ...
)

Value:

A configured LLM provider object.
Arguments:

base_url       The base URL for the Ollama API.
model          The name of the model to use.
temperature    Sampling temperature controlling response randomness.
max_tokens     Maximum number of tokens in the response.
timeout        Request timeout in seconds.
stream         Whether to use streaming responses.
verbose        Whether to print verbose output.
skip_test      Whether to skip the provider availability test.
auto_download  Whether to automatically download missing models.
...            Additional parameters passed to the model.
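A minimal usage sketch. It assumes the tidyprompt package is installed and a local Ollama server is running; `send_prompt()` is tidyprompt's standard function for dispatching a prompt to an LLM provider, and the prompt text here is purely illustrative:

```r
# Sketch only: requires the tidyprompt package and a running local
# Ollama server. llm_ollama() is the wrapper documented above.
provider <- llm_ollama(
  model = "qwen2.5:1.5b-instruct",
  temperature = 0.2,
  verbose = FALSE
)

# send_prompt() dispatches a prompt to the provider and returns the
# model's response (standard tidyprompt workflow).
answer <- tidyprompt::send_prompt(
  "Summarise the goal of this analysis in one sentence.",
  llm_provider = provider
)
```

With skip_test = FALSE (the default), construction checks that the server and model are reachable up front, so failures surface at provider creation rather than on the first prompt.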
Author(s):

Zaoqu Liu <liuzaoqu@163.com>