Method to integrate new LLM APIs
Usage

ch_submit(
  defaults,
  prompt = NULL,
  stream = NULL,
  prompt_build = TRUE,
  preview = FALSE,
  ...
)

Value

The output from the model currently in use.
Arguments

defaults: Defaults object, generally pulled from chattr_defaults()

prompt: The prompt to send to the LLM

stream: Whether to output the response from the LLM as it arrives, or to wait until the response is complete. Defaults to TRUE.

prompt_build: Include the context and additional prompt as part of the request

preview: Primarily used for debugging. Indicates whether to send the prompt to the LLM (FALSE), or to print out the resulting prompt (TRUE)

...: Optional arguments; currently unused.
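As a rough sketch of how an integration might look, the following assumes `ch_submit()` is an S3 generic that dispatches on the class of the `defaults` object. The class name `ch_my_llm` and the echo "API call" are hypothetical placeholders, not part of chattr:

```r
library(chattr)

# Hypothetical integration: "ch_my_llm" is a made-up class for illustration.
# Assumes ch_submit() dispatches on the class of `defaults`.
ch_submit.ch_my_llm <- function(defaults,
                                prompt = NULL,
                                stream = NULL,
                                prompt_build = TRUE,
                                preview = FALSE,
                                ...) {
  # With preview = TRUE, return the prompt instead of calling the LLM
  if (preview) {
    return(prompt)
  }
  # Placeholder for a real API call to your service
  response <- paste0("Echo: ", prompt)
  # With stream = TRUE, print output as it arrives (here, all at once)
  if (isTRUE(stream)) cat(response)
  # Return the full response so the caller can record it
  response
}
```

With a method like this registered, setting the class of the defaults object to `ch_my_llm` would route requests through the custom function.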