An R6 class representing an agent that interacts with language models.
*At the agent level we do not automate summarization.* The `maybe_summarize_memory()` method can be called manually if the user wishes to compress the agent's memory.
id: Character. Unique ID for this Agent.
context_length: Numeric. Maximum number of conversation turns stored in memory.
model_config: The llm_config object specifying which LLM to call.
memory: A list of speaker/text pairs the agent has memorized.
persona: Named list of additional agent-specific details (e.g., role, style).
enable_summarization: Logical. If TRUE, the user may call `maybe_summarize_memory()`.
token_threshold: Numeric. When summarization is triggered manually, total_tokens can be compared against this threshold.
total_tokens: Numeric. Estimated total tokens in memory.
summarization_density: Character. One of "low", "medium", or "high".
summarization_prompt: Character. Optional custom prompt for summarization.
summarizer_config: Optional llm_config used when summarizing the agent's memory.
auto_inject_conversation: Logical. If TRUE, conversation memory is automatically prepended to prompts that lack it.
new(): Create a new Agent instance.
Agent$new(
  id,
  context_length = 5,
  persona = NULL,
  model_config,
  enable_summarization = TRUE,
  token_threshold = 1000,
  summarization_density = "medium",
  summarization_prompt = NULL,
  summarizer_config = NULL,
  auto_inject_conversation = TRUE
)

id: Character. The agent's unique identifier.
context_length: Numeric. The maximum number of messages stored (default = 5).
persona: A named list of persona details.
model_config: An llm_config object specifying LLM settings.
enable_summarization: Logical. If TRUE, summarization may be invoked manually.
token_threshold: Numeric. Token threshold to compare against when summarization is invoked manually.
summarization_density: Character. "low", "medium", or "high"; controls summary detail.
summarization_prompt: Character. Optional custom prompt for summarization.
summarizer_config: Optional llm_config for summarization calls.
auto_inject_conversation: Logical. If TRUE, conversation memory is automatically prepended to the prompt if missing.
A new Agent object.
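A sketch of typical construction. The provider, model, and environment-variable names passed to `llm_config()` are placeholders, not package defaults:

```r
library(LLMR)  # package providing Agent and llm_config (assumed)

# Hypothetical config; provider/model/api_key values are illustrative.
cfg <- llm_config(
  provider = "openai",
  model    = "gpt-4o-mini",
  api_key  = Sys.getenv("OPENAI_API_KEY")
)

agent <- Agent$new(
  id           = "agent_1",
  model_config = cfg,
  persona      = list(role = "assistant", style = "concise")
)
```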
add_memory(): Add a new message to the agent's memory. Summarization is NOT called automatically here.

Agent$add_memory(speaker, text)

speaker: Character. The speaker name or ID.
text: Character. The message content.
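For example, recording a conversation turn manually (assuming an `agent` constructed as above):

```r
agent$add_memory(speaker = "user", text = "What is the capital of France?")
```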
maybe_summarize_memory(): Manually compress the agent's memory if desired. Summarizes all memory into a single "summary" message.

Agent$maybe_summarize_memory()
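Since summarization is never automatic, one might gate the call on the fields documented above (a sketch):

```r
# Summarize only when the estimated token count exceeds the threshold.
if (agent$enable_summarization && agent$total_tokens > agent$token_threshold) {
  agent$maybe_summarize_memory()
}
```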
generate_prompt(): Internal helper that prepares the final prompt by substituting placeholders.

Agent$generate_prompt(template, replacements = list())

template: Character. The prompt template.
replacements: A named list of placeholder values.

Character. The prompt with placeholders replaced.
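A minimal illustration of placeholder substitution. The `{{role}}` and `{{question}}` placeholders are hypothetical; `{{conversation}}` is the only placeholder the class itself documents:

```r
prompt <- agent$generate_prompt(
  template     = "You are {{role}}. Answer briefly: {{question}}",
  replacements = list(role = "a historian", question = "Who built the Colosseum?")
)
```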
call_llm_agent(): Low-level call to the LLM (via call_llm_robust) with a final prompt. If a persona is defined, a system message is prepended to help set the role.

Agent$call_llm_agent(prompt, verbose = FALSE)

prompt: Character. The final prompt text.
verbose: Logical. If TRUE, prints debug info. Default FALSE.

A list with:
* text
* tokens_sent
* tokens_received
* full_response (raw list)
generate(): Generate a response from the LLM using a prompt template and optional replacements. Substitutes placeholders, calls the LLM, saves the output to memory, and returns the response.

Agent$generate(prompt_template, replacements = list(), verbose = FALSE)

prompt_template: Character. The prompt template.
replacements: A named list of placeholder values.
verbose: Logical. If TRUE, prints extra info. Default FALSE.

A list with fields text, tokens_sent, tokens_received, and full_response.
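A sketch of a full generate() call. This performs a live LLM request, so valid credentials for the configured provider are assumed; the template and placeholder names are illustrative:

```r
res <- agent$generate(
  prompt_template = "Summarize in one sentence: {{topic}}",
  replacements    = list(topic = "the French Revolution")
)

res$text             # the model's reply (also saved to memory)
res$tokens_sent      # tokens sent to the model
res$tokens_received  # tokens received back
```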
think(): The agent "thinks" about a topic, possibly using the entire memory in the prompt. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, the memory is prepended.

Agent$think(topic, prompt_template, replacements = list(), verbose = FALSE)

topic: Character. Label for the thought.
prompt_template: Character. The prompt template.
replacements: Named list of additional placeholders.
verbose: Logical. If TRUE, prints info.
respond(): The agent produces a public "response" about a topic. If auto_inject_conversation is TRUE and the template lacks {{conversation}}, the memory is prepended.

Agent$respond(topic, prompt_template, replacements = list(), verbose = FALSE)

topic: Character. A short label for the question/issue.
prompt_template: Character. The prompt template.
replacements: Named list of placeholder substitutions.
verbose: Logical. If TRUE, prints extra info.

A list with text, tokens_sent, tokens_received, and full_response.
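A sketch of the think/respond pair. Because auto_inject_conversation defaults to TRUE and these templates omit {{conversation}}, the agent's memory is prepended automatically; the use of a {{topic}} placeholder here is an assumption for illustration:

```r
# Private reasoning step, recorded under the label "opening_statement".
agent$think(
  topic           = "opening_statement",
  prompt_template = "Privately consider your stance on {{topic}}.",
  replacements    = list(topic = "trade policy")
)

# Public response on the same topic.
out <- agent$respond(
  topic           = "opening_statement",
  prompt_template = "State your position on {{topic}} in two sentences.",
  replacements    = list(topic = "trade policy")
)
out$text
```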
reset_memory(): Reset the agent's memory.

Agent$reset_memory()
clone(): Objects of this class are cloneable with this method.

Agent$clone(deep = FALSE)

deep: Whether to make a deep clone.