aisdk: The AI SDK for R

aisdk is a production-grade framework for building AI-powered applications in R. It provides a unified interface for multiple model providers (OpenAI, Anthropic), a powerful agentic system, and seamless integration with the R ecosystem (Shiny, RMarkdown, Quarto).

Features

  • Unified API: Switch between OpenAI, Anthropic, AiHubMix, Gemini, and others with a single line of code.
  • Agentic Framework: Built-in support for Agents, Tasks, and Flows.
    • CoderAgent: Writes and edits code.
    • PlannerAgent: Breaks down complex problems.
    • Multi-Agent Teams: Coordinate multiple specialized agents.
  • Tool System: Turn any R function into an AI-callable tool with automatic schema generation.
  • Structured Outputs: Generate type-safe JSON, data frames, and complex objects.
  • Chat Sessions: Stateful conversation management with history tracking.
  • Enterprise Ops: Telemetry, hooks, cost tracking, and MCP (Model Context Protocol) support.
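
For example, structured outputs combine the schema DSL (z_object(), z_string(), z_number(), z_array(), and friends) with an object-generation helper. A minimal sketch, assuming a generate_object() helper that takes a schema argument and that the z_* constructors accept a description string; check the package reference for the exact signatures:

library(aisdk)

# Describe the shape of the desired output with the schema DSL
recipe_schema <- z_object(
  name = z_string("Name of the dish"),
  servings = z_number("Number of servings"),
  ingredients = z_array(z_string("An ingredient"))
)

# Ask the model for an object conforming to the schema
result <- generate_object(
  prompt = "Generate a simple pasta recipe.",
  schema = recipe_schema
)

# Access the parsed, schema-conformant result
result$object$name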

Installation

You can install the development version of aisdk from GitHub with:

# install.packages("devtools")
devtools::install_github("YuLab-SMU/aisdk")

Quick Start

Basic Text Generation

library(aisdk)

file.edit(".env")
## set OPENAI_API_KEY and, if needed, OPENAI_BASE_URL / OPENAI_MODEL in .env.

library(dotenv)
load_dot_env()

# Create a model provider and model
provider <- create_openai()
model <- provider$language_model(Sys.getenv("OPENAI_MODEL"))

# Stream the response and render it as it arrives
response <- stream_text(model, "Explain the concept of 'tidy evaluation' in R.")
render_text(response)

Set a Global Default Model

If you use the same model across a project, set it once and let high-level helpers reuse it automatically.

library(aisdk)

# Use a provider:model identifier
set_model("openai:gpt-4o-mini")

# Or store a concrete LanguageModelV1 object
# set_model(create_openai()$language_model("gpt-4o-mini"))

# High-level helpers now work without an explicit model argument
response <- generate_text(prompt = "Summarize the purpose of testthat in R.")
cat(response$text)

chat <- ChatSession$new()
chat$send("Continue with two practical testing tips.")

# Inspect or update the current default
model()
model("anthropic:claude-3-5-sonnet-latest")

The package-wide default model is used by generate_text(), stream_text(), ChatSession, create_chat_session(), auto_fix(), and the {ai} knitr engine when model is omitted.
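
In an RMarkdown or Quarto document, the {ai} knitr engine picks up the same project-wide default. A sketch of what a document might look like (see register_ai_engine() and the engine documentation for exact setup details):

```{r setup}
library(aisdk)
set_model("openai:gpt-4o-mini")
```

```{ai}
Explain what the pipe operator %>% does in R.
```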

Advanced Options (AiHubMix Native Interfaces)

AiHubMix provides compatibility layers that let you use Anthropic and Gemini models with their native API structures, unlocking features like Claude Prompt Caching or Gemini structured outputs. aisdk makes this easy with dedicated factory wrappers:

library(aisdk)

# Use Claude models with the Anthropic REST format (unlocks caching)
aihubmix_claude <- create_aihubmix_anthropic(extended_caching = TRUE)
claude_model <- aihubmix_claude$language_model("claude-3-5-sonnet-20241022")

# Use Gemini models with the Gemini REST format
aihubmix_gemini <- create_aihubmix_gemini()
gemini_model <- aihubmix_gemini$language_model("gemini-2.5-flash")
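
The resulting models plug into the same high-level helpers as any other provider, for example:

# Stream a reply from the Claude model obtained above
response <- stream_text(claude_model, "Summarize the split-apply-combine pattern in R.")
render_text(response)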

Custom Providers

If you need to connect to a provider not natively supported by aisdk, but which offers an API compatible with OpenAI or Anthropic formats, you can use create_custom_provider() to dynamically generate a provider at runtime:

library(aisdk)

# Create a custom provider compatible with OpenAI's chat completions
custom_provider <- create_custom_provider(
  provider_name = "my_custom_proxy",
  base_url = "https://my-custom-proxy.com/v1",
  api_key = Sys.getenv("CUSTOM_API_KEY"),
  api_format = "chat_completions" # Also supports "anthropic_messages" or "responses"
)

# Use it just like any other provider
custom_model <- custom_provider$language_model("my-custom-model-id")
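
As with built-in providers, the resulting model should work with the standard helpers. A sketch, assuming generate_text() accepts the model as an explicit argument (the model ID below is hypothetical; substitute one your endpoint actually serves):

# Generate text through the custom endpoint
response <- generate_text(model = custom_model, prompt = "Say hello.")
cat(response$text)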

Building an Agent

Create an agent with tools to solve tasks:

# Define a calculator tool (schema inferred from the function signature)
calc_tool <- tool(
  name = "add_numbers",
  description = "Adds two numbers together",
  execute = function(a, b) a + b
)

# Create an agent
agent <- create_agent(
  name = "Mathematician",
  description = "Solve math problems accurately",
  system_prompt = "You are a helpful mathematician.",
  tools = list(calc_tool)
)

# Run the agent
result <- agent$run("What is 1234 + 5678?", model = model)

# Or stream the response
result <- agent$stream("What is 1234 + 5678?", model = model)

render_text(result)

Stateful Conversations

Agents are stateless by default. To have a multi-turn conversation where the agent remembers previous interactions, create a Chat Session from the agent:

# Create a session from the agent
session <- agent$create_session(model = model)

# First interaction
session$send("What is 1234 + 5678?")

# Follow-up (context is preserved)
session$send("Divide that by 2") 

Interactive Console Chat

If you want a terminal-first workflow, console_chat() provides an interactive REPL on top of ChatSession with built-in agent tooling:

# Start with the default terminal agent
# console_chat("openai:gpt-4o")

# Start in compact chat mode without tools
# console_chat("openai:gpt-4o", agent = NULL)

Current console features include:

  • streaming replies with slash-command session control
  • three output modes: clean, inspect, and debug
  • a persistent status bar showing model, sandbox, stream, and tool state
  • per-turn tool timeline summaries in inspect mode
  • an overlay-backed inspector for the latest turn or an individual tool
  • session persistence via /save and /load

Useful commands:

  • /inspect on, /inspect turn, /inspect tool <index>
  • /inspect next, /inspect prev, /inspect close
  • /debug [on|off], /stream [on|off]
  • /model <id>, /history, /stats, /clear

Skills System

The Skills system allows you to package specialized knowledge and tools that can be dynamically loaded by agents. This saves context window space and keeps your agents focused.

Use the Demo Skill:

# Initialize the registry and tools
registry <- create_skill_registry(system.file("skills", package = "aisdk"))
skill_tools <- create_skill_tools(registry)

# Create an agent
analyst <- create_agent(
  name = "DataAnalyst",
  description = "A data analysis agent",
  system_prompt = "You are an expert data analyst. Use the available skills to help the user.",
  tools = skill_tools # <--- Pass the list of tools, NOT the registry
)

# Run the agent
analyst$stream("Load the data_analysis skill and summarize the mtcars dataset.", model = model)

Example Output:

── ❯ Thinking...    
─ 
─  The user wants me to: 
─  1. Load the data_analysis skill 
─  2. Summarize the mtcars dataset 
─   
─  Let me start by loading the skill to see what scripts are available. 
─ 
── ✔ Done thinking (5 lines) 

  (ℹ Hide with options(aisdk.show_thinking = FALSE)) 


I'll help you load the data_analysis skill and summarize the mtcars dataset. Let me start by loading the skill.
ℹ Calling tool load_skill {"skill_name":"data_analysis"}
✔ Tool load_skill returned: "# Instructions Use the `analyze.R` script to analyze datasets.  ## Available Datasets - iris: Fisher's Iris dataset (150 observations, 5 variables) - mtcars: Motor Trend Car Road Tests (32 observat..."
── ❯ Thinking...
─ 
─  Now I can see that the data_analysis skill has an analyze.R script that can analyze datasets. The mtcars dataset is available, and I should use the analyze.R script with the dataset parameter set to "mtcars". I'll use the default top_n value of 5 as shown in the example. 
─ 
── ✔ Done thinking (1 lines) 

  (ℹ Hide with options(aisdk.show_thinking = FALSE)) 


▶ Calling tool execute_skill_script {"skill_name":"data_analysis","script_name":"analyze.R","args":{"dataset":"mtcars","top_n":5}}
✔ Tool execute_skill_script returned: "Dataset: mtcars Dimensions: 32 rows x 11 cols Columns: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb Numeric Summary: mpg: min=10.40, max=33.90, mean=20.09 cyl: min=4.00, max=8.00, mean=..."
── ❯ Thinking...
─ 
─  Great! I've successfully loaded the data_analysis skill and analyzed the mtcars dataset. The output provides a comprehensive summary of the dataset including: 
─   
─  1. Dataset dimensions (32 rows x 11 columns) 
─  2. Column names 
─  3. Numeric summary statistics (min, max, mean) for all variables 
─  4. First 5 rows of the dataset 
─   
─  This gives a good overview of the mtcars dataset, which contains data about Motor Trend car road tests with various performance and design characteristics for 32 different car models. 
─ 
── ✔ Done thinking (8 lines) 

  (ℹ Hide with options(aisdk.show_thinking = FALSE)) 


I've successfully loaded the data_analysis skill and analyzed the mtcars dataset. Here's a summary:


── Dataset Overview ──
- Name: mtcars (Motor Trend Car Road Tests)
- Dimensions: 32 observations × 11 variables
- Variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb

── Key Statistics Summary ──
| Variable | Min   | Max    | Mean   | Description                     |
| -------- | ----- | ------ | ------ | ------------------------------- |
| mpg      | 10.40 | 33.90  | 20.09  | Miles per gallon                |
| cyl      | 4.00  | 8.00   | 6.19   | Number of cylinders             |
| disp     | 71.10 | 472.00 | 230.72 | Displacement (cu.in.)           |
| hp       | 52.00 | 335.00 | 146.69 | Gross horsepower                |
| drat     | 2.76  | 4.93   | 3.60   | Rear axle ratio                 |
| wt       | 1.51  | 5.42   | 3.22   | Weight (1000 lbs)               |
| qsec     | 14.50 | 22.90  | 17.85  | 1/4 mile time                   |
| vs       | 0.00  | 1.00   | 0.44   | Engine type (0=V, 1=Straight)   |
| am       | 0.00  | 1.00   | 0.41   | Transmission (0=Auto, 1=Manual) |
| gear     | 3.00  | 5.00   | 3.69   | Number of forward gears         |
| carb     | 1.00  | 8.00   | 2.81   | Number of carburetors           |

── Sample Data (First 5 Rows) ──
The dataset includes classic cars like the Mazda RX4 (21.0 mpg), Datsun 710 (22.8 mpg), and Hornet models, showing a range of performance characteristics from fuel economy to horsepower and weight.

This dataset is commonly used for regression analysis and exploring relationships between car design parameters and fuel efficiency.

Documentation

Full documentation is available at https://yulab-smu.top/aisdk/.

Package Info

  • Version: 1.1.0
  • License: MIT + file LICENSE
  • Maintainer: Yonghe Xia
  • Last published: March 31st, 2026
  • Install: install.packages('aisdk')
