edgemodelr (version 0.1.6)

edge_stream_completion: Stream text completion with real-time token generation

Description

Generates a text completion from a prompt, invoking a user-supplied callback as each token is produced so output can be displayed in real time.

Usage

edge_stream_completion(ctx, prompt, callback, n_predict = 128L, temperature = 0.8, 
                       top_p = 0.95)

Value

A list containing the full response text and generation statistics.

Arguments

ctx

Model context from edge_load_model()

prompt

Input text prompt

callback

Function called for each generated token. Receives a list with token information; return TRUE to continue generation or FALSE to stop early.

n_predict

Maximum tokens to generate (default: 128)

temperature

Sampling temperature (default: 0.8)

top_p

Top-p sampling parameter (default: 0.95)

Examples

if (FALSE) {
# Requires a downloaded model (not run in checks)
model_path <- "model.gguf"
if (file.exists(model_path)) {
  ctx <- edge_load_model(model_path)

  # Basic streaming with token display
  result <- edge_stream_completion(ctx, "Hello, how are you?",
    callback = function(data) {
      if (!data$is_final) {
        cat(data$token)
        flush.console()
      } else {
        cat("\n[Done: ", data$total_tokens, " tokens]\n")
      }
      return(TRUE)  # Continue generation
    })

  edge_free_model(ctx)
}
}
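
The callback's return value can also be used to cut generation short. The sketch below assumes the callback contract described above (returning FALSE halts generation); "model.gguf" is a placeholder path, and it is guarded the same way as the example above because it requires a downloaded model.

```r
if (FALSE) {
# Early stopping: halt generation after a fixed number of tokens
model_path <- "model.gguf"
if (file.exists(model_path)) {
  ctx <- edge_load_model(model_path)

  n_seen <- 0L
  result <- edge_stream_completion(ctx, "Write a short poem:",
    callback = function(data) {
      if (!data$is_final) {
        n_seen <<- n_seen + 1L  # count streamed tokens
        cat(data$token)
        flush.console()
      }
      return(n_seen < 20L)  # FALSE once 20 tokens seen, stopping early
    })

  edge_free_model(ctx)
}
}
```

Because the callback decides whether to continue on every token, the same pattern can implement stop sequences or timeouts by changing the condition in the return value.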