This function creates an OpenAI-compatible LLM provider with comprehensive error handling and configurable availability testing. If the requested max_tokens exceeds the model's limit, it automatically falls back to the model's maximum.
Usage:

llm_provider(
  base_url = "https://api.openai.com/v1/chat/completions",
  api_key = NULL,
  model = "gpt-4o-mini",
  temperature = 0.2,
  max_tokens = 5000,
  timeout = 100,
  stream = FALSE,
  verbose = TRUE,
  skip_test = FALSE,
  test_mode = c("full", "http_only", "skip"),
  ...
)

Value:

A configured LLM provider object.
Arguments:

base_url      The base URL of the OpenAI-compatible API.
api_key       The API key for authentication. If NULL, the LLM_API_KEY environment variable is used.
model         The model name to use.
temperature   The temperature parameter controlling response randomness.
max_tokens    Maximum number of tokens in the response (auto-adjusted if it exceeds the model's limit).
timeout       Request timeout in seconds.
stream        Whether to use streaming responses.
verbose       Whether to show verbose output.
skip_test     Whether to skip the availability test (useful for problematic providers).
test_mode     The testing mode: one of "full", "http_only", or "skip".
...           Additional parameters passed to the model.
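A typical call might look like the following sketch. The exact structure of the returned provider object depends on the package; the API key shown is a placeholder, and in practice the key would come from the LLM_API_KEY environment variable rather than being hard-coded.

```r
# Hypothetical usage sketch; the placeholder key is for illustration only.
Sys.setenv(LLM_API_KEY = "sk-example-key")

provider <- llm_provider(
  base_url    = "https://api.openai.com/v1/chat/completions",
  model       = "gpt-4o-mini",
  temperature = 0.2,
  max_tokens  = 200000,    # above the model limit; falls back to the model maximum
  test_mode   = "skip"     # skip the availability test, e.g. when working offline
)
```

Setting test_mode = "skip" (or skip_test = TRUE) avoids the startup availability check, which is useful for providers that reject test requests or when no network is available.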
Author(s):

Zaoqu Liu <liuzaoqu@163.com>