aisdk (version 1.1.0)

stream_text: Stream Text

Description

Generate text using a language model with streaming output. Tokens are delivered in real time through a callback as they are produced, and the accumulated result is returned when the stream completes.

Usage

stream_text(
  model = NULL,
  prompt,
  callback = NULL,
  system = NULL,
  temperature = 0.7,
  max_tokens = NULL,
  tools = NULL,
  max_steps = 1,
  sandbox = FALSE,
  skills = NULL,
  session = NULL,
  hooks = NULL,
  registry = NULL,
  ...
)

Value

A GenerateResult object (accumulated from the stream).

Arguments

model: Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o".

prompt: A character string prompt, or a list of messages.

callback: A function called for each text chunk, with signature callback(text, done).

system: Optional system prompt.

temperature: Sampling temperature, between 0 and 2. Default 0.7.

max_tokens: Maximum number of tokens to generate.

tools: Optional list of Tool objects for function calling.

max_steps: Maximum number of generation steps (tool-execution loops). Default 1. Set to a higher value (e.g. 5) to enable automatic tool execution.

sandbox: Logical. If TRUE, enables the R-native programmatic sandbox mode; see generate_text for details. Default FALSE.

skills: Optional path to a skills directory, or a SkillRegistry object.

session: Optional ChatSession object for shared state.

hooks: Optional HookHandler object.

registry: Optional ProviderRegistry to use.

...: Additional arguments passed to the model.
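The callback receives each chunk as it arrives; judging by the if (!done) guard in the example below, the done flag marks the end of the stream. A minimal sketch of a callback that prints chunks live while also accumulating the full text. The loop at the bottom simulates the calls stream_text would make, so the sketch runs without a model or credentials:

```r
# Environment used as a mutable buffer for the accumulated text,
# so the callback can update it from inside its own scope.
buffer <- new.env()
buffer$text <- ""

on_chunk <- function(text, done) {
  if (!done) {
    buffer$text <- paste0(buffer$text, text)
    cat(text)  # print the chunk as it arrives
  }
}

# Simulated chunks, standing in for what the model would stream:
for (chunk in c("Once ", "upon ", "a time.")) on_chunk(chunk, FALSE)
on_chunk("", TRUE)  # final call signalling the end of the stream

buffer$text  # "Once upon a time."
```

Passing on_chunk as the callback argument would then leave the accumulated text in buffer$text alongside the returned GenerateResult.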

Examples

if (interactive()) {
  model <- create_openai()$language_model("gpt-4o")
  stream_text(model, "Tell me a story", callback = function(text, done) {
    if (!done) cat(text)
  })
}
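The model argument also accepts a string ID, so a provider object is not strictly required. A hedged sketch combining the documented system, temperature, and max_tokens arguments; it needs provider credentials at run time, hence the interactive() guard:

```r
if (interactive()) {
  stream_text(
    model = "openai:gpt-4o",  # string-ID form of the model argument
    prompt = "Explain streaming in one sentence",
    system = "You are terse.",
    temperature = 0.2,
    max_tokens = 100,
    callback = function(text, done) if (!done) cat(text)
  )
}
```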