Generate Text in Parallel for Multiple Prompts
Usage

generate_parallel(
  context,
  prompts,
  max_tokens = 100L,
  top_k = 40L,
  top_p = 1,
  temperature = 0,
  repeat_last_n = 0L,
  penalty_repeat = 1,
  seed = 1234L,
  progress = FALSE,
  clean = FALSE,
  hash = TRUE
)
Arguments

context: A context object created with context_create.

prompts: Character vector of input text prompts.

max_tokens: Maximum number of tokens to generate (default: 100).

top_k: Top-k sampling parameter (default: 40). Limits the candidate vocabulary to the k most likely tokens.

top_p: Top-p (nucleus) sampling parameter (default: 1.0). Cumulative probability threshold for token selection.

temperature: Sampling temperature (default: 0.0). Set to 0 for greedy decoding; higher values produce more varied output.

repeat_last_n: Number of recent tokens considered for the repetition penalty (default: 0). Set to 0 to disable.

penalty_repeat: Repetition penalty strength (default: 1.0). Values greater than 1 discourage repetition; set to 1.0 to disable.

seed: Random seed for reproducible generation (default: 1234). Use a positive integer for deterministic output.

progress: If TRUE, displays a console progress bar indicating batch completion status while generations are running (default: FALSE).

clean: If TRUE, removes common chat-template control tokens from each generated text (default: FALSE).

hash: If TRUE (default), computes SHA-256 hashes of the supplied prompts and the generated outputs and attaches them to the result via the "hashes" attribute for later inspection.

Value

Character vector of generated texts.
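
Examples

A minimal usage sketch. The model path and the arguments to context_create are assumptions made for illustration; consult context_create's own documentation for its actual signature.

ctx <- context_create("models/example-7b.gguf")  # hypothetical path and signature

prompts <- c(
  "Summarise the plot of Hamlet in one sentence.",
  "Write a haiku about autumn."
)

texts <- generate_parallel(
  ctx,
  prompts,
  max_tokens  = 64L,
  temperature = 0,     # greedy decoding for reproducible output
  progress    = TRUE,  # console progress bar across the batch
  clean       = TRUE   # strip chat-template control tokens
)

# hash = TRUE by default, so SHA-256 hashes of the prompts and
# outputs are available on the result:
attr(texts, "hashes")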