Generate text using a language model with streaming output. This function provides a real-time stream of tokens through a callback.
stream_text(
  model = NULL,
  prompt,
  callback = NULL,
  system = NULL,
  temperature = 0.7,
  max_tokens = NULL,
  tools = NULL,
  max_steps = 1,
  sandbox = FALSE,
  skills = NULL,
  session = NULL,
  hooks = NULL,
  registry = NULL,
  ...
)
A GenerateResult object (accumulated from the stream).
Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o".
A character string prompt, or a list of messages.
A function called for each text chunk: callback(text, done).
Optional system prompt.
Sampling temperature (0-2). Default 0.7.
Maximum tokens to generate.
Optional list of Tool objects for function calling.
Maximum number of generation steps (tool execution loops). Default 1. Set to higher values (e.g., 5) to enable automatic tool execution.
Logical. If TRUE, enables R-native programmatic sandbox mode. See generate_text for details. Default FALSE.
Optional path to skills directory, or a SkillRegistry object.
Optional ChatSession object for shared state.
Optional HookHandler object.
Optional ProviderRegistry to use.
Additional arguments passed to the model.
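The tools and max_steps arguments can be combined so that streamed generation pauses to execute tool calls and then continues. The sketch below assumes a Tool constructor named create_tool(); that name is hypothetical, so substitute whatever Tool helper this package actually exports:

```r
# Hedged sketch of automatic tool execution while streaming.
# NOTE: create_tool() is a hypothetical constructor used for illustration;
# replace it with the Tool helper exported by your installed version.
if (interactive()) {
  weather_tool <- create_tool(
    name = "get_weather",
    description = "Return the current weather for a city",
    execute = function(city) paste0("Sunny in ", city)
  )

  model <- create_openai()$language_model("gpt-4o")

  result <- stream_text(
    model,
    "What is the weather in Paris?",
    tools = list(weather_tool),
    max_steps = 5,  # allow tool-call loops instead of a single pass
    callback = function(text, done) if (!done) cat(text)
  )
}
```

With max_steps = 1 (the default), tool calls are returned in the result but not executed; raising it lets the function run the tool, feed the output back to the model, and keep streaming the final answer.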
# \donttest{
if (interactive()) {
  model <- create_openai()$language_model("gpt-4o")
  stream_text(model, "Tell me a story", callback = function(text, done) {
    if (!done) cat(text)
  })
}
# }