- x
An object of class LLM or Agent.
- prompt
Character: The prompt to pass to the model or agent.
- temperature
Optional numeric [0, 2]: Per-call sampling temperature.
- top_p
Optional numeric [0, 1]: Nucleus sampling cutoff.
- max_tokens
Optional integer [1, Inf): Maximum number of tokens to generate. For Anthropic
this overrides the config-level value (which the Anthropic API requires); for Ollama it
maps to options.num_predict; for OpenAI-compatible backends it maps to max_tokens.
See the first sketch after this list.
- stop
Optional character: Stop sequence(s). Mapped to stop_sequences on Anthropic
and options.stop on Ollama.
- think
Optional logical or character: Whether to enable model thinking
(reasoning trace) for this call. Character values target gpt-oss-style local models.
- output_schema
Optional Schema: Output schema to enforce on this call's response.
If omitted, the object's default schema (if any) is used. See the second
sketch after this list.
- verbosity
Integer: Verbosity level; higher values produce more diagnostic output.
- ...
Additional backend-specific per-call arguments. See Details.
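
The sampling arguments above compose into a single per-call override. A
minimal sketch, assuming a constructor `llm()` and a generic `chat()` (both
hypothetical names; only the argument names documented above come from this
page):

```r
# Hypothetical sketch: per-call sampling overrides.
# `llm()` and `chat()` are assumed names, not confirmed API.
x <- llm(backend = "ollama", model = "llama3.1")  # assumed constructor

res <- chat(
  x,
  prompt      = "Summarise the plot of Hamlet in two sentences.",
  temperature = 0.2,       # low temperature for a more deterministic answer
  top_p       = 0.9,       # nucleus sampling cutoff
  max_tokens  = 200,       # sent as options.num_predict on Ollama
  stop        = c("\n\n")  # sent as options.stop on Ollama
)
```

None of these arguments mutate `x`; they apply to this call only, with
config-level defaults used for anything left unset.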
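Likewise, `think` and `output_schema` can be set per call without changing
the object's defaults. A sketch under the same assumed names, plus a
hypothetical `schema()` constructor for the Schema class mentioned above:

```r
# Hypothetical sketch: per-call thinking and schema enforcement.
# `schema()` is an assumed constructor for the Schema class.
s <- schema(title = "character", year = "integer")

res <- chat(
  x,
  prompt        = "Name one Shakespeare play and its year of first performance.",
  think         = TRUE,  # request a reasoning trace for this call only
  output_schema = s      # overrides the object's default schema, if any
)
```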