Provides a wrapper for the openai() function to facilitate migration from the deprecated chatgpt() function. It ensures backward compatibility while allowing users to transition to the updated features.
chatgpt(
.llm,
.model = "gpt-4o",
.max_tokens = 1024,
.temperature = NULL,
.top_p = NULL,
.top_k = NULL,
.frequency_penalty = NULL,
.presence_penalty = NULL,
.api_url = "https://api.openai.com/",
.timeout = 60,
.verbose = FALSE,
.json = FALSE,
.stream = FALSE,
.dry_run = FALSE
)
An LLMMessage object with the assistant's reply.
An LLMMessage object (passed directly to the openai() function)
A character string specifying the model to use.
An integer specifying the maximum number of tokens (mapped to .max_completion_tokens in openai())
A numeric value for controlling randomness.
A numeric value for nucleus sampling, indicating the cumulative probability mass of tokens to consider (top-p).
Currently unused, as it is not supported by openai().
A numeric value that penalizes new tokens based on their frequency so far.
A numeric value that penalizes new tokens based on whether they appear in the text so far.
Character string specifying the API URL. Defaults to the OpenAI API endpoint.
An integer specifying the request timeout in seconds.
If TRUE, prints additional information about the request (default: FALSE)
Should JSON mode be used? (default: FALSE)
Should the response be processed as a stream? (default: FALSE)
If TRUE, the request is constructed but not actually sent. Useful for debugging and testing. (default: FALSE)
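As a sketch of how the .dry_run flag can be used for debugging (this assumes the llm_message() constructor used in the package's own example):

```r
# Build a message and construct the request without sending it to the API
msg <- llm_message("Hello, how are you?")
req <- chatgpt(.llm = msg, .dry_run = TRUE)

# Inspecting `req` shows the request that would have been sent,
# which is useful for verifying the model and parameters offline
print(req)
```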
This function is deprecated and is now a wrapper around openai(). It is recommended to switch to using openai() directly in future code. The chatgpt() function remains available to ensure backward compatibility for existing projects.
Use openai() instead.
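Because chatgpt() simply forwards to openai(), existing calls can usually be migrated one-to-one. A minimal sketch of an equivalent pair of calls (note that, as described above, .max_tokens maps to .max_completion_tokens in openai()):

```r
msg <- llm_message("Summarise this text in one sentence.")

# Deprecated call:
result_old <- chatgpt(.llm = msg, .model = "gpt-4o", .max_tokens = 1024)

# Preferred equivalent:
result_new <- openai(.llm = msg, .model = "gpt-4o", .max_completion_tokens = 1024)
```

Both calls return an LLMMessage object with the assistant's reply, so downstream code should not need further changes.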
if (FALSE) {
# Using the deprecated chatgpt() function
result <- chatgpt(.llm = llm_message("Hello, how are you?"))
}