LLMAgentR

Overview

LLMAgentR is an R package for building Language Model Agents using a modular state graph execution framework. Inspired by LangGraph and LangChain architectures, it supports iterative workflows for research, data analysis, and automation.


Installation

install.packages("LLMAgentR")

Development version

To get the latest features or bug fixes, you can install the development version of LLMAgentR from GitHub:

# If needed
install.packages("remotes")

remotes::install_github("knowusuboaky/LLMAgentR")

See the full function reference or the package website for more details.


Environment Setup

API Setup

Sys.setenv(
  OPENAI_API_KEY     = "your-openai-key",
  GROQ_API_KEY       = "your-groq-key",
  ANTHROPIC_API_KEY  = "your-anthropic-key",
  DEEPSEEK_API_KEY   = "your-deepseek-key",
  DASHSCOPE_API_KEY  = "your-dashscope-key",
  GH_MODELS_TOKEN    = "your-github-models-token"
)
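
Only the keys for the providers you actually use need to be set. A quick base-R check (a minimal sketch; the key names are exactly the ones set above) confirms which keys are visible to the current session without printing their values:

```r
# Report which provider API keys are set in the current session
keys <- c("OPENAI_API_KEY", "GROQ_API_KEY", "ANTHROPIC_API_KEY",
          "DEEPSEEK_API_KEY", "DASHSCOPE_API_KEY", "GH_MODELS_TOKEN")

# nzchar() is TRUE only when the variable is set to a non-empty string
key_set <- vapply(keys, function(k) nzchar(Sys.getenv(k)), logical(1))
print(key_set)
```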

LLM Support (Minimal Wrapper)

The chatLLM package lets you interact with large language models (LLMs) either through direct calls or via reusable minimal wrappers.

Load the Package

library(chatLLM)

Minimal Wrapper Function

A direct call to call_llm() looks like this:

call_llm(
  prompt      = "Summarize the capital of France.",
  provider    = "groq",
  model       = "llama3-8b",
  temperature = 0.7,
  max_tokens  = 200,
  verbose     = TRUE
)

For reuse, create a lightweight wrapper around call_llm() with optional verbose output:

my_llm_wrapper <- function(prompt, verbose = FALSE) {
  if (verbose) {
    message("[my_llm_wrapper] Sending prompt to LLM...")
  }

  # Suppress console output but always return the response
  response_text <- if (verbose) {
    call_llm(
      prompt     = prompt,
      provider   = "openai",
      model      = "gpt-4o",
      max_tokens = 3000,
      verbose    = TRUE
    )
  } else {
    suppressMessages(
      suppressWarnings(
        call_llm(
          prompt     = prompt,
          provider   = "openai",
          model      = "gpt-4o",
          max_tokens = 3000,
          verbose    = FALSE
        )
      )
    )
  }

  if (verbose) {
    message("[my_llm_wrapper] Response received.")
  }

  return(response_text)
}
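
Before wiring the wrapper into an agent, a quick sanity check can catch signature mistakes (a minimal sketch; the live call is commented out because it requires a valid OPENAI_API_KEY and network access):

```r
# Confirm the wrapper is a function exposing the expected arguments
stopifnot(is.function(my_llm_wrapper))
stopifnot(all(c("prompt", "verbose") %in% names(formals(my_llm_wrapper))))

# Live call (uncomment once OPENAI_API_KEY is set):
# response <- my_llm_wrapper("Summarize the capital of France.", verbose = TRUE)
# cat(response)
```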

Quick Access Shortcut

Alternatively, preconfigure an LLM call for one-liners; when no prompt is supplied, call_llm() returns a reusable function:

my_llm_wrapper <- call_llm(
  provider   = "openai",
  model      = "gpt-4o",
  max_tokens = 3000,
  verbose    = TRUE
)

# Then call it directly, e.g. my_llm_wrapper("Summarize the capital of France.")

Related Package: chatLLM

The chatLLM package (now available on CRAN) offers a modular interface for interacting with LLM providers including OpenAI, Groq, Anthropic, DeepSeek, DashScope, and GitHub Models.

install.packages("chatLLM")

Agent Articles

Detailed guides now live in the pkgdown Articles (one per agent), including a guide to custom graph workflows. A full index page is also available on the package website.

License

MIT (c) Kwadwo Daddy Nyame Owusu Boakye


Monthly Downloads: 601
Version: 0.3.2
License: MIT + file LICENSE
Maintainer: Kwadwo Daddy Nyame Owusu Boakye
Last Published: February 14th, 2026

Functions in LLMAgentR (0.3.2)

state_graph_utils: State Graph Utilities for Custom Agents
build_researcher_agent: Build a Web Researcher Agent
build_sql_agent: Build a SQL Agent Graph
build_custom_agent: Build a Custom Graph-Based Agent
build_doc_summarizer_agent: Build a Document Summarizer Agent
build_data_cleaning_agent: Build a Data Cleaning Agent
as_mermaid: Convert a Custom Graph Spec to Mermaid
build_data_wrangling_agent: Build a Data Wrangling Agent
build_code_agent: Build an R Code-Generation Agent
build_custom_multi_agent: Build a Custom Multi-Agent Team (Supervisor Style)
build_forecasting_agent: Build a Time Series Forecasting Agent
build_interpreter_agent: Build an Interpreter Agent
build_feature_engineering_agent: Build a Feature Engineering Agent
compile_graph: Compile a Custom Agent Graph (LangGraph-Style Output)
build_weather_agent: Build a Weather Agent
build_visualization_agent: Build a Visualization Agent
save_mermaid_png: Save Mermaid Diagram as PNG