Call the OpenAI API to interact with ChatGPT or o-reasoning models
chatgpt(
.llm,
.model = "gpt-4o",
.max_tokens = 1024,
.temperature = NULL,
.top_p = NULL,
.top_k = NULL,
.frequency_penalty = NULL,
.presence_penalty = NULL,
.api_url = "https://api.openai.com/",
.timeout = 60,
.verbose = FALSE,
.wait = TRUE,
.min_tokens_reset = 0L,
.stream = FALSE
)
Returns an updated LLMMessage object.
.llm: An existing LLMMessage object or an initial text prompt.
.model: The model identifier (default: "gpt-4o").
.max_tokens: The maximum number of tokens to generate (default: 1024).
.temperature: Controls randomness in response generation (optional).
.top_p: Nucleus sampling parameter (optional).
.top_k: Top-k sampling parameter (optional).
.frequency_penalty: Penalizes tokens based on how frequently they have already appeared in the response (optional).
.presence_penalty: Penalizes tokens that have already appeared in the response at all, encouraging new topics (optional).
.api_url: Base URL for the API (default: "https://api.openai.com/").
.timeout: Request timeout in seconds (default: 60).
.verbose: Should additional information be shown after the API call? (default: FALSE)
.wait: Should the call wait for rate limits to reset if necessary? (default: TRUE)
.min_tokens_reset: Number of remaining tokens below which the call waits for the rate-limit token reset (default: 0L).
.stream: Stream the response back piece by piece (default: FALSE).
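A minimal usage sketch. The prompt text is illustrative, and the example assumes a tidyllm-style workflow in which llm_message() builds the initial LLMMessage object and last_reply() extracts the latest assistant response; check your installed package version for the exact helper names.

```r
library(tidyllm)

# Build an initial message and send it to the OpenAI API.
# Model and temperature here are illustrative choices.
conversation <- llm_message("Explain the difference between a list and a vector in R.") |>
  chatgpt(.model = "gpt-4o", .temperature = 0.7)

# chatgpt() returns the updated LLMMessage object, so calls can be
# chained to continue the conversation; last_reply() (assumed accessor)
# pulls out the most recent assistant response as a character string.
last_reply(conversation)
```

Because the function returns the updated LLMMessage object, further llm_message() and chatgpt() calls can be piped onto the result to carry on a multi-turn conversation.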