Default arguments to use when making requests to the LLM
Usage

chattr_defaults(
type = "default",
prompt = NULL,
max_data_files = NULL,
max_data_frames = NULL,
include_doc_contents = NULL,
include_history = NULL,
provider = NULL,
path = NULL,
model = NULL,
model_arguments = NULL,
system_msg = NULL,
yaml_file = "chattr.yml",
force = FALSE,
label = NULL,
...
)

Value

A 'ch_model' object that contains the current defaults that will be used to communicate with the LLM.
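For example, a minimal sketch of viewing the current defaults (assuming the chattr package is installed; calling the function with no arguments does not send a request to the LLM):

library(chattr)

# Returns the 'ch_model' object that holds the current defaults
defaults <- chattr_defaults()
defaults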
Arguments

type: Entry point to interact with the model. Accepted values: 'notebook', 'chat'

prompt: Request to send to the LLM. Defaults to NULL

max_data_files: Sets the maximum number of data files to send to the model. It defaults to 20. To send all, set to NULL

max_data_frames: Sets the maximum number of data frames loaded in the current R session to send to the model. It defaults to 20. To send all, set to NULL

include_doc_contents: Whether to send the current code in the document

include_history: Indicates whether to include the chat history each time a new prompt is submitted

provider: The name of the LLM provider. Today, only "openai" is available

path: The location of the model. It can be a URL or a file path.

model: The name or path of the model to use.

model_arguments: Additional arguments to pass to the model as part of the request; it requires a list. Examples of arguments: temperature, top_p, max_tokens (see the sketch after this list)

system_msg: For OpenAI GPT 3.5 or above, the system message to send as part of the request

yaml_file: The path to a valid config YAML file that contains the defaults to use in a session

force: Re-process the base and any workspace-level file defaults

label: Label to display in the Shiny app, and other locations

...: Additional model arguments that are not standard for all models/backends
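As referenced in the model_arguments entry above, a sketch of setting request-level arguments via a list; the specific values are illustrative, not recommendations, and the names a provider accepts may vary:

library(chattr)

# Pass request arguments as a named list; which names are supported
# depends on the provider and model
chattr_defaults(
  model_arguments = list(
    temperature = 0.7,
    top_p = 1,
    max_tokens = 1000
  )
)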
Details

The idea is that, because the addin shortcut will be used to execute the
request, all of the other arguments can be controlled via this function. By
default, it will try to load defaults from a config YAML file; if none is
found, then the defaults for GPT 3.5 will be used. The defaults can be
modified by calling this function, even after the interactive session has
started.
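A sketch of updating the defaults mid-session; the model name shown is an assumption for illustration:

library(chattr)

# Update individual defaults at any point in the session
chattr_defaults(model = "gpt-3.5-turbo", include_history = TRUE)

# Re-read the YAML file defaults, replacing the current session values
chattr_defaults(yaml_file = "chattr.yml", force = TRUE)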