- x
Character or list: Values to iterate over. Each element becomes the user prompt for one
call to the LLM.
- model_or_llm
Character or LLM: Either the name of a model (a string) or a pre-built
LLM object (for example from create_Ollama, create_OpenAI, or create_Anthropic).
- backend
Character {"ollama", "openai", "anthropic"}: Backend to use when model_or_llm
is a string. Ignored when model_or_llm is an LLM object.
- system_prompt
Character: System prompt to use when building the LLM from a model name.
Ignored when model_or_llm is an LLM object.
- output_schema
Optional Schema: Output schema to enforce, created with schema. When
model_or_llm is a string, the schema is baked into the LLM that gets built. When
model_or_llm is a pre-built LLM, supplying a schema here is treated as a conflict and
raises an error.
- verbosity
Integer [0, Inf): Verbosity level for the iteration itself; each individual call runs at
verbosity - 1L.
- extract_responses
Logical: If TRUE, return a character vector of assistant responses
(with NA_character_ for missing assistant content). If FALSE, return the raw list of
Message objects from each call.
- ...
Additional per-call arguments forwarded to generate (e.g. temperature, top_p,
max_tokens, stop, think, top_k, seed).
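
The two calling conventions described above (a model name plus backend, or a pre-built LLM) can be sketched as follows. This is a hypothetical usage sketch: the mapping function's name is not given in this documentation, so it is assumed here as llm_map(); substitute the actual exported name, and note that create_Ollama() and the prompts shown are illustrative values, not defaults.

```r
# Hypothetical sketch; llm_map() stands in for the documented function.

# Style 1: pass a model name as a string. backend, system_prompt, and
# output_schema are used to build the LLM internally.
answers <- llm_map(
  x                 = c("What is 2 + 2?", "Name a prime number."),
  model_or_llm      = "llama3",      # a string, so backend is consulted
  backend           = "ollama",
  system_prompt     = "Answer in one word.",
  verbosity         = 1L,            # each call runs at verbosity - 1L = 0L
  extract_responses = TRUE,          # character vector; NA_character_ for
                                     # missing assistant content
  temperature       = 0              # forwarded per call via ...
)

# Style 2: pass a pre-built LLM. backend and system_prompt are ignored,
# and supplying output_schema here raises an error.
llm <- create_Ollama(model = "llama3", system_prompt = "Answer in one word.")
raw_messages <- llm_map(
  x                 = c("What is 2 + 2?"),
  model_or_llm      = llm,
  extract_responses = FALSE          # raw list of Message objects per call
)
```

With extract_responses = TRUE the result is a character vector the same length as x; with FALSE it is a list, one element per call, preserving the full Message objects.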