- provider
  Character scalar. One of "openai", "anthropic", "gemini", "groq",
  "together", "voyage" (embeddings only), "deepseek", "xai".
- model
  Character scalar. Model name understood by the chosen provider
  (e.g., "gpt-4o-mini", "o4-mini", "claude-3.7", "gemini-2.0-flash").
- api_key
Character scalar. Provider API key.
- troubleshooting
  Logical. If TRUE, prints the full request payloads (including your
  API key!) for debugging. Use with extreme caution.
- base_url
  Optional character. Back-compat alias; if supplied, it is stored as
  api_url in model_params and overrides the default endpoint.
- embedding
  NULL (default), TRUE, or FALSE. If TRUE, the call is routed to the
  provider's embeddings API; if FALSE, to the chat API. If NULL, LLMR
  infers embeddings when model contains "embedding".
- no_change
  Logical. If TRUE, LLMR never auto-renames or adjusts provider
  parameters. If FALSE (default), well-known compatibility shims may
  apply (e.g., renaming OpenAI's max_tokens to max_completion_tokens
  after a server hint; see call_llm() notes).
- ...
  Additional provider-specific parameters (e.g., temperature, top_p,
  max_tokens, top_k, repetition_penalty, reasoning_effort, api_url).
  Values are forwarded verbatim unless documented shims apply.