Simple chat_request() wrapper: send text to the chat endpoint and get the response back.
Usage:
  feedback(question, model = "gpt-3.5-turbo", max_tokens = NULL, print = TRUE)

Value:
  string, the chat answer

Arguments:
  question    string, the question text
  model       string, ID of the model to use. See the model endpoint compatibility table
              (https://platform.openai.com/docs/models/model-endpoint-compatibility) for
              details on which models work with the Chat API.
  max_tokens  NULL/int, the maximum number of tokens to generate in the chat completion
  print       flag, if TRUE, print the answer to the console
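
Example (a minimal sketch, assuming the package exporting feedback() is attached and the OpenAI API key is already configured the way the package expects; the question text, model, and token limit below are illustrative only):

  # ask a question and capture the answer without printing it
  answer <- feedback(
    "Summarise what a p-value means in one sentence.",  # illustrative question
    model = "gpt-3.5-turbo",
    max_tokens = 100,   # illustrative limit; NULL (the default) imposes none
    print = FALSE       # suppress console output, keep only the return value
  )
  cat(answer)  # the returned string is the chat answer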