
LLMR (version 0.4.2)

call_llm_broadcast: Mode 2: Message Broadcast - Fixed Config, Multiple Messages

Description

Broadcasts different messages using the same configuration, in parallel. This is useful for batch processing many prompts with consistent settings. The parallel environment must first be set up with `setup_llm_parallel`, and can be reset afterwards with `reset_llm_parallel`.

Usage

call_llm_broadcast(config, messages_list, ...)

Value

A tibble with one row per message and the columns: message_index (the position of the corresponding message in `messages_list`), provider, model, all model parameters, response_text, raw_response_json, success, and error_message.

Arguments

config

Single llm_config object to use for all calls.

messages_list

A list of message lists, each for one API call.

...

Additional arguments passed to `call_llm_par` (e.g., tries, verbose, progress).
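
For instance, retry and progress options can be forwarded through `...`. A minimal sketch, assuming `config` and `messages_list` as constructed in the Examples below; the argument names come from the list above, and their exact behaviour is documented in `call_llm_par`:

results <- call_llm_broadcast(
  config, messages_list,
  tries = 3,       # forwarded to call_llm_par: retry attempts per call
  verbose = TRUE,  # forwarded to call_llm_par: per-call status output
  progress = TRUE  # forwarded to call_llm_par: progress reporting
)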

Examples

if (FALSE) {
  library(LLMR)

  # Fixed configuration shared by every call
  config <- llm_config(provider = "openai", model = "gpt-4o-mini",
                       api_key = Sys.getenv("OPENAI_API_KEY"))

  # One message list per API call: broadcast different questions
  messages_list <- list(
    list(list(role = "user", content = "What is 2+2?")),
    list(list(role = "user", content = "What is 3*5?")),
    list(list(role = "user", content = "What is 10/2?"))
  )

  # Set up the parallel backend, broadcast, then reset it afterwards
  setup_llm_parallel(workers = 4, verbose = TRUE)
  results <- call_llm_broadcast(config, messages_list)
  reset_llm_parallel(verbose = TRUE)
}
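
After a broadcast, the per-call results can be split on the `success` flag. A minimal sketch using base subsetting and the columns documented under Value (`results` is the object created in the example above):

# Successful calls, paired with the position of their input message
answered <- results[results$success, c("message_index", "response_text")]

# Failed calls, with the reported error message
failed <- results[!results$success, c("message_index", "error_message")]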
