
llmhelper (version 1.0.0)

llm_ollama: Create Ollama LLM provider with enhanced availability check and auto-download

Description

Creates an Ollama LLM provider with enhanced error handling, an availability check for the requested model, and optional automatic model download, following tidyprompt best practices.

Usage

llm_ollama(
  base_url = "http://localhost:11434/api/chat",
  model = "qwen2.5:1.5b-instruct",
  temperature = 0.2,
  max_tokens = 5000,
  timeout = 100,
  stream = TRUE,
  verbose = TRUE,
  skip_test = FALSE,
  auto_download = TRUE,
  ...
)

Value

A configured LLM provider object.

Arguments

base_url

The URL of the Ollama chat API endpoint

model

The model name to use

temperature

The sampling temperature controlling response randomness; lower values produce more deterministic output

max_tokens

Maximum number of tokens in the response

timeout

Request timeout in seconds

stream

Whether to use streaming responses

verbose

Whether to show verbose output

skip_test

Whether to skip the availability test

auto_download

Whether to automatically download the model if it is not available locally

...

Additional parameters to pass to the model
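
Examples

A minimal usage sketch (hypothetical values; assumes the llmhelper package is installed and, for a live call, an Ollama server listening at the default address):

```r
library(llmhelper)

# Create a provider with the package defaults. skip_test = TRUE bypasses the
# availability check, which is useful when no Ollama server is running yet.
provider <- llm_ollama(
  model = "qwen2.5:1.5b-instruct",
  temperature = 0.2,
  skip_test = TRUE,
  verbose = FALSE
)

# Since the provider follows tidyprompt conventions, it could presumably be
# used with tidyprompt workflows (an assumption based on the Description;
# requires a running server):
# tidyprompt::send_prompt("Hello", provider)
```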

Author

Zaoqu Liu; Email: liuzaoqu@163.com