Call the Groq API to interact with fast open-source models hosted on Groq.
groq(
  .llm,
  .model = "llama-3.2-90b-text-preview",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .api_url = "https://api.groq.com/",
  .timeout = 60,
  .verbose = FALSE,
  .wait = TRUE,
  .min_tokens_reset = 0L
)
Returns an updated LLMMessage object.
.llm: An existing LLMMessage object or an initial text prompt.
.model: The model identifier (default: "llama-3.2-90b-text-preview").
.max_tokens: The maximum number of tokens to generate (default: 1024).
.temperature: Controls randomness in response generation (optional).
.top_p: Nucleus sampling parameter (optional).
.frequency_penalty: Controls repetition frequency (optional).
.presence_penalty: Controls how strongly repeated content is penalized (optional).
.api_url: Base URL for the API (default: "https://api.groq.com/").
.timeout: Request timeout in seconds (default: 60).
.verbose: Should additional information be shown after the API call? (default: FALSE)
.wait: Should the function wait when rate limits are hit? (default: TRUE)
.min_tokens_reset: How many tokens must remain in the rate limit before the function waits for a token reset (default: 0L).
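
A minimal usage sketch. It assumes the package provides a helper such as llm_message() to build the initial LLMMessage; that helper is not documented on this page, so treat it as an assumption.

# Assumption: llm_message() constructs the initial LLMMessage prompt
msg <- llm_message("Explain nucleus sampling in one paragraph.")

# Query Groq with a low temperature for more deterministic output
msg <- groq(
  msg,
  .model = "llama-3.2-90b-text-preview",
  .max_tokens = 512,
  .temperature = 0.2,
  .verbose = TRUE  # show rate-limit and token details after the call
)

Because groq() returns the updated LLMMessage object, calls compose naturally with R's native pipe, e.g. llm_message("...") |> groq().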