llmhelper (version 1.0.0)

llm_provider: Create OpenAI-compatible LLM provider with enhanced error handling

Description

This function creates an OpenAI-compatible LLM provider with comprehensive error handling and testing capabilities. If the requested max_tokens exceeds the model's limit, it automatically falls back to the model's maximum.

Usage

llm_provider(
  base_url = "https://api.openai.com/v1/chat/completions",
  api_key = NULL,
  model = "gpt-4o-mini",
  temperature = 0.2,
  max_tokens = 5000,
  timeout = 100,
  stream = FALSE,
  verbose = TRUE,
  skip_test = FALSE,
  test_mode = c("full", "http_only", "skip"),
  ...
)

Value

A configured LLM provider object

Arguments

base_url

The base URL for the OpenAI-compatible API

api_key

The API key for authentication. If NULL, the LLM_API_KEY environment variable is used

model

The model name to use

temperature

Sampling temperature controlling response randomness (higher values yield more varied output)

max_tokens

Maximum number of tokens in the response (automatically reduced to the model's limit if exceeded)

timeout

Request timeout in seconds

stream

Whether to use streaming responses

verbose

Whether to show verbose output

skip_test

Whether to skip the availability test (useful for problematic providers)

test_mode

The testing mode, one of "full", "http_only", or "skip", controlling how the provider is tested on creation

...

Additional parameters to pass to the model
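Examples

A minimal usage sketch based only on the parameters documented above. The API keys shown are placeholders, and the code is not run here since it requires a live endpoint:

```r
# Hypothetical usage of llmhelper::llm_provider.
library(llmhelper)

# With api_key = NULL (the default), the key is read from the
# LLM_API_KEY environment variable.
Sys.setenv(LLM_API_KEY = "sk-...")  # placeholder key

provider <- llm_provider(
  base_url    = "https://api.openai.com/v1/chat/completions",
  model       = "gpt-4o-mini",
  temperature = 0.2,
  max_tokens  = 5000,
  test_mode   = "http_only"  # connectivity check only, no full request test
)

# Skip the availability test entirely for providers that reject probe requests
offline_provider <- llm_provider(
  api_key   = "sk-...",  # placeholder key passed explicitly
  skip_test = TRUE,
  verbose   = FALSE
)
```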

Author

Zaoqu Liu; Email: liuzaoqu@163.com