deliberr (version 0.1.0)

get_llm_response: Get LLM Response from OpenRouter.ai

Description

get_llm_response sends a prompt to a specified large language model (LLM) through the OpenRouter.ai API and returns the text response. It can also manage conversation history across multiple turns.

Usage

get_llm_response(
  user_prompt,
  model_id = "x-ai/grok-3-mini",
  system_prompt = NA_character_,
  context = NULL,
  temperature = 0,
  api_key = Sys.getenv("OPENROUTER_API_KEY")
)

Value

A list containing three elements: response (the model's text reply), context (the updated conversation history, suitable for passing to a follow-up call), and cost

cost is a list containing prompt_cost, completion_cost, and total_cost, all in USD
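As a minimal sketch of how the returned list is typically used (assuming a valid API key is set, so the call succeeds):

```r
result <- get_llm_response("Hello!")

# The model's text reply
cat(result$response)

# The cost breakdown, in USD
result$cost$total_cost

# The updated history, ready for a follow-up turn
next_turn <- get_llm_response("Tell me more.", context = result$context)
```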

Arguments

user_prompt

a string containing the prompt or question for the model

model_id

a string specifying the model to use (e.g., "google/gemini-flash-1.5"). You can find model names on the OpenRouter.ai website

system_prompt

a string defining the role or behavior of the model. This is only used for the first message in a conversation (when 'context' is NULL)

context

a list representing the conversation history. If provided, the 'system_prompt' is ignored, as the context is assumed to contain the full history. Defaults to NULL for a new conversation

temperature

a numeric value between 0 and 2 that controls the randomness of the model's output. Higher values mean more "creative" responses

api_key

a string containing your OpenRouter.ai API key. It is strongly recommended to use the default, which retrieves the key from an environment variable named OPENROUTER_API_KEY
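One way to keep the key out of your scripts is to store it in your .Renviron file so it is loaded automatically at the start of each session (a sketch; the key value shown is a placeholder):

```r
# Add this line to ~/.Renviron (e.g. by opening it with
# usethis::edit_r_environ()), then restart R:
# OPENROUTER_API_KEY=your_api_key_here

# Check that the key is visible to the current session
# (Sys.getenv() returns "" when the variable is unset)
nzchar(Sys.getenv("OPENROUTER_API_KEY"))
```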

Examples

if (FALSE) {
# Make sure to set your API key first
# Sys.setenv(OPENROUTER_API_KEY = "your_api_key_here")

# First turn of the conversation
first_turn <- get_llm_response(
  user_prompt = "What are the three main benefits of using R for data analysis?",
  model_id = "x-ai/grok-3-mini",
  system_prompt = "You are a helpful assistant who provides concise answers."
)
cat("--- Initial Response ---\n")
cat(first_turn$response)
cat(paste0("\n--- Total Cost: $",
format(first_turn$cost$total_cost, scientific = FALSE), " ---\n"))

# Follow-up question using the context from the first turn
second_turn <- get_llm_response(
  user_prompt = "Can you elaborate on the second benefit you mentioned?",
  model_id = "x-ai/grok-3-mini",
  context = first_turn$context
)
cat("\n\n--- Follow-up Response ---\n")
cat(second_turn$response)
cat(paste0("\n--- Total Cost: $", format(second_turn$cost$total_cost,
scientific = FALSE), " ---\n"))
}
