
LLMR (version 0.4.2)

llm_fn: Vectorised LLM transformer

Description

Vectorised LLM transformer: applies a glue prompt template to every element of a character vector (or every row of a data frame) and returns one model response per input.

Usage

llm_fn(x, prompt, .config, .system_prompt = NULL, ...)

Value

A character vector with one element per input (one per element of x, or one per row when x is a data frame). Failed calls yield NA.

Arguments

x

A character vector **or** a data.frame/tibble.

prompt

A glue template string. *If* x is a data frame, use {col} placeholders; *if* x is a vector, refer to the element as {x}.

.config

An llm_config object.

.system_prompt

Optional system message (character scalar).

...

Passed unchanged to call_llm_broadcast (e.g. tries, progress, verbose).
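
A hedged sketch of passing extra options through ...: the names tries and progress below are the ones mentioned above and are simply forwarded to call_llm_broadcast. The config mirrors the one built in the Examples.

cfg <- llm_config(
  provider = "openai",
  model    = "gpt-4.1-nano",
  api_key  = Sys.getenv("OPENAI_API_KEY"),
  temperature = 0
)

out <- llm_fn(
  c("excellent", "awful"),
  prompt   = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
  .config  = cfg,
  tries    = 3,     # forwarded to call_llm_broadcast: retry attempts per call
  progress = TRUE   # forwarded to call_llm_broadcast: show a progress bar
)

out[is.na(out)]     # calls that still failed after retrying come back as NA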

Details

Runs each prompt through `call_llm_broadcast()`, which forwards the requests to `call_llm_par()`. That core engine executes them **in parallel** according to the current *future* plan. To enable multi-core execution, call `setup_llm_parallel(workers = 4)` (or however many workers you prefer) once per session; revert with `reset_llm_parallel()`.
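
A minimal sketch of the parallel workflow just described, reusing cfg from the sketch above (the worker count of 4 comes from the text and is arbitrary):

setup_llm_parallel(workers = 4)   # run subsequent requests across 4 parallel workers

sentiments <- llm_fn(
  c("excellent", "awful", "average"),
  prompt  = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
  .config = cfg
)

reset_llm_parallel()              # revert the parallel setup when done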

See Also

setup_llm_parallel, reset_llm_parallel, call_llm_par

Examples

if (FALSE) {
## --- Vector input ------------------------------------------------------
cfg <- llm_config(
  provider = "openai",
  model    = "gpt-4.1-nano",
  api_key  = Sys.getenv("OPENAI_API_KEY"),
  temperature = 0
)

words <- c("excellent", "awful", "average")

llm_fn(
  words,
  prompt   = "Classify sentiment of '{x}' as Positive, Negative, or Neutral.",
  .config  = cfg,
  .system_prompt = "Respond with ONE word only."
)

## --- Data-frame input inside a tidyverse pipeline ----------------------
library(dplyr)

reviews <- tibble::tibble(
  id     = 1:3,
  review = c("Great toaster!", "Burns bread.", "It's okay.")
)

reviews |>
  llm_mutate(
    sentiment,
    prompt  = "Classify the sentiment of this review: {review}",
    .config = cfg,
    .system_prompt = "Respond with Positive, Negative, or Neutral."
  )
}
