
LLMR (version 0.4.2)

call_llm: Call LLM API

Description

Sends messages to the specified LLM API and retrieves the generated response. With an embedding-model configuration, the same call returns embedding results instead.

Usage

call_llm(config, messages, verbose = FALSE, json = FALSE)

Value

The generated text response or embedding results. If `json = TRUE`, the raw JSON response string and the parsed response list are attached as the attributes `raw_json` and `full_response`.
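
These attributes can be read back with base R's `attr()`. A minimal sketch, reusing a `config` and `messages` built as in the Examples below:

if (FALSE) {
  response <- call_llm(config, messages, json = TRUE)
  cat(response)                    # the generated text itself
  attr(response, "raw_json")       # raw JSON string returned by the API
  attr(response, "full_response")  # parsed response as an R list
}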

Arguments

config

An `llm_config` object created by `llm_config()`.

messages

A list of message objects, each with `role` and `content` fields (or, for embedding requests, a plain character vector of texts; see the sketch at the end of the Examples). For multimodal requests, the `content` of a message can itself be a list of parts, e.g., `list(list(type = "text", text = "..."), list(type = "file", path = "..."))`.

verbose

Logical. If `TRUE`, prints the full API response.

json

Logical. If `TRUE`, the returned text carries the raw JSON response and the parsed list as the attributes `raw_json` and `full_response`.

Examples

if (FALSE) {
  # Standard text call
  config <- llm_config(provider = "openai", model = "gpt-4o-mini", api_key = "...")
  messages <- list(list(role = "user", content = "Hello!"))
  response <- call_llm(config, messages)

  # Multimodal call (for supported providers like Gemini, Claude 3, GPT-4o)
  # Make sure to use a vision-capable model in your config
  multimodal_config <- llm_config(provider = "openai", model = "gpt-4o", api_key = "...")
  multimodal_messages <- list(list(role = "user", content = list(
    list(type = "text", text = "What is in this image?"),
    list(type = "file", path = "path/to/your/image.png")
  )))
  image_response <- call_llm(multimodal_config, multimodal_messages)
}
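
As noted under Arguments, `messages` also accepts a plain character vector for embedding requests. A minimal sketch follows; the embedding model name here is an assumption, so substitute one your provider actually offers:

if (FALSE) {
  # Embedding call: pass a character vector instead of a message list.
  # NOTE: "text-embedding-3-small" is an assumed model name, not taken
  # from this documentation; replace it with a real embedding model.
  embed_config <- llm_config(provider = "openai",
                             model = "text-embedding-3-small",
                             api_key = "...")
  texts <- c("First document.", "Second document.")
  embeddings <- call_llm(embed_config, texts)
}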
