Shiny module server for AI-powered chat, featuring non-blocking streaming via background processes and a tool-execution bridge.
Usage:

aiChatServer(
  id,
  model,
  tools = NULL,
  context = NULL,
  system = NULL,
  debug = FALSE,
  on_message_complete = NULL
)

Value:

A reactive value containing the chat history.
Arguments:

id: The namespace ID for the module.

model: Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o".

tools: Optional list of Tool objects for function calling.

context: Optional reactive expression that returns context data to inject into the system prompt. This is read with isolate() to avoid reactive loops.

system: Optional system prompt.

debug: Reactive expression or logical. If TRUE, shows raw debug output in the UI.

on_message_complete: Optional callback function called when a message is complete. Takes one argument: the complete assistant message text.
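A minimal usage sketch under stated assumptions: aiChatUI() is assumed to be the module's UI counterpart (a hypothetical name not confirmed by this page), and the model string follows the "provider:model" form shown above.

```r
library(shiny)

ui <- fluidPage(
  # aiChatUI() is a hypothetical companion UI function for this module
  aiChatUI("chat")
)

server <- function(input, output, session) {
  # Context data injected into the system prompt; the module reads it
  # with isolate(), so updating this value does not retrigger the chat.
  notes <- reactiveVal("The user is reviewing quarterly sales data.")

  history <- aiChatServer(
    id = "chat",
    model = "openai:gpt-4o",               # string-ID form from the docs
    system = "You are a concise analyst.",
    context = reactive(notes()),
    debug = FALSE,
    on_message_complete = function(text) { # receives the full assistant reply
      message("Assistant said: ", text)
    }
  )

  # `history` is a reactive value containing the chat history.
}

shinyApp(ui, server)
```

The `id` passed to aiChatServer() must match the namespace ID used when constructing the module's UI, per the standard Shiny module convention.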