
llmflow (version 3.0.2)

AutoFlow: Automated R Analysis Workflow with LLM

Description

Runs an automated R analysis workflow driven by a large language model: a ReAct reasoning-and-action loop executes R code in a callr session, optionally paired with a second model for RAG documentation retrieval.

Usage

AutoFlow(
  react_llm,
  task_prompt,
  rag_llm = NULL,
  max_turns = 15,
  pkgs_to_use = c(),
  objects_to_use = list(),
  existing_session = NULL,
  verbose = TRUE,
  r_session_options = list(),
  context_window_size = 3000,
  max_observation_length = 800,
  error_escalation_threshold = 3
)

Value

ReAct result object

Arguments

react_llm

Chat object for ReAct task execution (required)

task_prompt

Task description (required)

rag_llm

Chat object for RAG documentation retrieval (default: NULL, uses react_llm)

max_turns

Maximum ReAct turns (default: 15)

pkgs_to_use

Packages to load in the R session

objects_to_use

Named list of objects to load into the R session

existing_session

Existing callr R session to reuse (default: NULL)

verbose

Verbose output (default: TRUE)

r_session_options

Options passed to the callr R session

context_window_size

Context window size for the ReAct history (default: 3000)

max_observation_length

Maximum observation length (default: 800)

error_escalation_threshold

Error count threshold before escalation (default: 3)
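
The following is a minimal sketch of the session-related arguments above, assuming an OpenAI key is configured and using the `llm_openai()` constructor from the Examples below; the task string and object names are illustrative:

library(callr)

sess <- r_session$new()                    # persistent worker session to reuse
llm <- llm_openai(model = "gpt-4o")

result <- AutoFlow(
  react_llm = llm,
  task_prompt = "Summarise my_data and plot its first two columns",
  pkgs_to_use = c("dplyr", "ggplot2"),     # attached inside the session
  objects_to_use = list(my_data = mtcars), # copied into the session
  existing_session = sess,
  max_turns = 10
)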

Details

**Dual-LLM Architecture:**

AutoFlow supports using different models for different purposes:

- `rag_llm`: Retrieval-Augmented Generation; retrieves relevant function documentation
- `react_llm`: ReAct execution; performs the reasoning and action loop

**Why separate models?**

- RAG tasks are simple (extracting function names), so a fast, cheap model is enough
- ReAct tasks are complex (coding and reasoning), so they benefit from a powerful model
- Cost savings: ~70%

If `rag_llm` is NULL, both operations use `react_llm`.
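
A hedged sketch of the loop-control arguments (defaults are shown in Usage; the effect of each knob is assumed from its name, and the task is hypothetical):

llm <- llm_openai(model = "gpt-4o")

result <- AutoFlow(
  react_llm = llm,                 # rag_llm omitted: RAG also uses this model
  task_prompt = "Fit lm(mpg ~ wt, data = mtcars) and report R-squared",
  max_turns = 10,                  # cap the ReAct loop
  context_window_size = 2000,      # keep a shorter history
  max_observation_length = 500,    # assumed to truncate long observations
  error_escalation_threshold = 2   # escalate after fewer errors
)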

Examples

if (FALSE) {
# Simple: same model for both
llm <- llm_openai(model = "gpt-4o")
result <- AutoFlow(llm, "Load mtcars and plot mpg vs hp")

# Optimized: lightweight RAG, powerful ReAct
rag <- llm_openai(model = "gpt-3.5-turbo") # Fast & cheap
react <- llm_openai(model = "gpt-4o") # Powerful
result <- AutoFlow(
  react_llm = react,
  task_prompt = "Perform PCA on iris dataset",
  rag_llm = rag
)

# Cross-provider: DeepSeek RAG + Claude ReAct
rag <- chat_deepseek(model = "deepseek-chat")
react <- chat_anthropic(model = "claude-sonnet-4-20250514")
result <- AutoFlow(react, "Complex analysis", rag_llm = rag)

# Batch evaluation with shared RAG
rag <- chat_deepseek(model = "deepseek-chat")
react <- chat_openai(model = "gpt-4o")

tasks <- c("Summarise mtcars", "Plot iris petal length") # illustrative task list
for (task in tasks) {
  result <- AutoFlow(react, task, rag_llm = rag, verbose = FALSE)
}
}
