LLMR (version 0.6.0)

llmr_response: LLMR Response Object

Description

A lightweight S3 container for generative model calls. It standardizes finish reasons and token usage across providers and keeps the raw response for advanced users.

Returns the standardized finish reason for an llmr_response.

Returns a list with token counts for an llmr_response.

Convenience check for truncation due to token limits.

Usage

finish_reason(x)

tokens(x)

is_truncated(x)

# S3 method for llmr_response
as.character(x, ...)

# S3 method for llmr_response
print(x, ...)

Value

finish_reason(): a length-1 character vector, or NA_character_ if unavailable.

tokens(): a list of the form list(sent, rec, total, reasoning). Missing counts are NA.

is_truncated(): TRUE if the reply was truncated, otherwise FALSE.

Arguments

x

An llmr_response object.

...

Ignored.

Details

Fields

  • text: character scalar. Assistant reply.

  • provider: character. Provider id (e.g., "openai", "gemini").

  • model: character. Model id.

  • finish_reason: one of "stop", "length", "filter", "tool", "other".

  • usage: list with integers sent, rec, total, reasoning (if available).

  • response_id: provider’s response identifier if present.

  • duration_s: numeric seconds from request to parse.

  • raw: parsed provider JSON (list).

  • raw_json: raw JSON string.
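The standardized finish_reason values above lend themselves to simple cross-provider dispatch; a minimal sketch (handle_finish is a hypothetical helper, not part of LLMR):

```r
# Hypothetical helper: map a standardized finish reason to a short diagnosis.
handle_finish <- function(reason) {
  switch(reason,
    stop   = "complete",
    length = "truncated: consider raising max_tokens",
    filter = "blocked by the provider's content filter",
    tool   = "model requested a tool call",
    other  = "unrecognized finish condition"
  )
}

handle_finish("length")
```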

Printing

print() shows the text, then a compact status line with model, finish reason, token counts, and a terse hint if truncated or filtered.

Coercion

as.character() extracts text so the object remains drop-in for code that expects a character return.
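A self-contained illustration of the drop-in behavior; the as.character method below is a stand-in for the one LLMR registers, defined here only so the snippet runs without the package:

```r
# Stand-in coercion method (LLMR ships the real one); extracts the reply text.
as.character.llmr_response <- function(x, ...) x$text

r <- structure(list(text = "Hello!"), class = "llmr_response")

# Code written against a plain character return keeps working unchanged:
nchar(as.character(r))            # 6
paste("Reply:", as.character(r))  # "Reply: Hello!"
```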

See Also

call_llm(), call_llm_robust(), llm_chat_session(), llm_config(), llm_mutate(), llm_fn()

Examples

# Minimal fabricated example (no network):
r <- structure(
  list(
    text = "Hello!",
    provider = "openai",
    model = "demo",
    finish_reason = "stop",
    usage = list(sent = 12L, rec = 5L, total = 17L, reasoning = NA_integer_),
    response_id = "resp_123",
    duration_s = 0.012,
    raw = list(choices = list(list(message = list(content = "Hello!")))),
    raw_json = "{}"
  ),
  class = "llmr_response"
)
as.character(r)
finish_reason(r)
tokens(r)
print(r)
# Accessors on the fabricated object:
fr <- finish_reason(r)

u <- tokens(r)
u$total

if (is_truncated(r)) message("Increase max_tokens")
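The relationship between is_truncated() and finish_reason() can be pictured with stand-in definitions (LLMR's actual implementation may differ; these are defined here only so the snippet runs without the package):

```r
# Stand-in accessors illustrating the contract: truncation due to token
# limits corresponds to the "length" finish reason.
finish_reason <- function(x) x$finish_reason
is_truncated  <- function(x) identical(finish_reason(x), "length")

hit_limit <- structure(list(finish_reason = "length"), class = "llmr_response")
is_truncated(hit_limit)  # TRUE
```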