
chatAI4R (version 0.3.6)

chat4Rv2: Interact with gpt-4o-mini (default) using the OpenAI API

Description

This function uses the OpenAI API to interact with the gpt-4o-mini model (default) and generates responses based on user input. Currently, "gpt-4o-mini", "gpt-4o", "gpt-4", "gpt-4-turbo", and "gpt-3.5-turbo" can be selected as the OpenAI LLM model.

Usage

chat4Rv2(
  content,
  Model = "gpt-4o-mini",
  temperature = 1,
  max_tokens = 50,
  simple = TRUE,
  fromJSON_parsed = FALSE,
  system_prompt = "",
  api_key = Sys.getenv("OPENAI_API_KEY")
)

Value

A data frame containing the response from the GPT model. When simple = TRUE (the default), only the content of the model's message is included.

Arguments

content

A string containing the user's input message.

Model

A string specifying the GPT model to use (default: "gpt-4o-mini").

temperature

A numeric value controlling the randomness of the model's output (default: 1).

max_tokens

A numeric value specifying the maximum number of tokens to generate (default: 50).

simple

Logical; if TRUE, only the content of the model's message is returned.

fromJSON_parsed

Logical; if TRUE, the response content is parsed from JSON.

system_prompt

A string containing the system message to set the context. If provided, it will be added as the first message in the conversation. Default is an empty string.

api_key

A string containing the user's OpenAI API key. Defaults to the value of the environment variable "OPENAI_API_KEY".

Author

Satoshi Kume

Details

chat4Rv2 Function

Examples

if (FALSE) {
Sys.setenv(OPENAI_API_KEY = "Your API key")
# Using chat4Rv2 without system_prompt (default behavior)
response <- chat4Rv2(content = "What is the capital of France?")
response

# Using chat4Rv2 with a system_prompt provided
response <- chat4Rv2(content = "What is the capital of France?",
                     system_prompt = "You are a helpful assistant.")
response
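
# Illustrative sketch (not part of the original examples): the calls below
# exercise the other documented arguments. "gpt-4o" is one of the models
# listed in the Description; the temperature and max_tokens values are
# arbitrary choices for demonstration.
response <- chat4Rv2(content = "Summarize the history of Paris in one sentence.",
                     Model = "gpt-4o",
                     temperature = 0.5,
                     max_tokens = 100)
response

# With simple = FALSE, the documentation states the full response is
# returned rather than only the message content.
full_response <- chat4Rv2(content = "What is the capital of France?",
                          simple = FALSE)
full_response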
}
