An R6 class for managing a conversation among multiple Agent objects.
Includes optional conversation-level summarization if `summarizer_config` is provided:
summarizer_config: A list that can contain:
llm_config: The llm_config used for the summarizer call (defaults to a basic OpenAI configuration).
prompt: A custom summarizer prompt (default provided).
threshold: Word-count threshold (default 3000 words).
summary_length: Target length in words for the summary (default 400).
Once the total conversation word count exceeds `threshold`, a summarization is triggered.
The conversation is replaced with a single condensed message that keeps track of who said what.
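For illustration, a summarizer_config could be assembled as in the sketch below. The llm_config() call and its arguments (provider, model, api_key) are assumptions about the configuration helper available in your setup, not part of this class's API.

# Sketch of a summarizer_config list; llm_config() and its arguments are assumed.
summarizer_config <- list(
  llm_config     = llm_config(provider = "openai",
                              model    = "gpt-4o-mini",
                              api_key  = Sys.getenv("OPENAI_API_KEY")),
  prompt         = "Condense the conversation below, noting who said what.",
  threshold      = 2000,  # summarize once the conversation exceeds 2000 words
  summary_length = 300    # aim for roughly 300 words in the summary
)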
agents: A named list of Agent objects.
conversation_history: A list of speaker/text pairs for the entire conversation.
conversation_history_full: A list of speaker/text pairs for the entire conversation that is never modified (in particular, it is not replaced by summarization) and is not used directly.
topic: A short string describing the conversation's theme.
prompts: An optional list of prompt templates (may be ignored).
shared_memory: A global store that is also fed into each agent's memory.
last_response: The last response received in the conversation.
total_tokens_sent: Total number of tokens sent in the conversation.
total_tokens_received: Total number of tokens received in the conversation.
summarizer_config: A configuration list controlling optional conversation-level summarization.
new(): Create a new conversation.
LLMConversation$new(topic, prompts = NULL, summarizer_config = NULL)
topic: Character. The conversation topic.
prompts: Optional named list of prompt templates.
summarizer_config: Optional list controlling conversation-level summarization.
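A minimal construction sketch, reusing the summarizer_config list assumed above (it may also be omitted or NULL to disable summarization):

conv <- LLMConversation$new(
  topic             = "Designing a study on remote work",
  summarizer_config = summarizer_config  # optional; NULL disables summarization
)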
add_agent(): Add an Agent to this conversation. The agent is stored by agent$id.
LLMConversation$add_agent(agent)
agent: An Agent object.
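A sketch of registering two agents with the conversation created above. The Agent$new() arguments shown (id, llm_config) and the llm_config() helper are assumptions about how agents are constructed; adapt them to the actual Agent constructor.

cfg    <- llm_config(provider = "openai", model = "gpt-4o-mini",
                     api_key  = Sys.getenv("OPENAI_API_KEY"))  # assumed helper
solver <- Agent$new(id = "Solver", llm_config = cfg)           # assumed arguments
critic <- Agent$new(id = "Critic", llm_config = cfg)
conv$add_agent(solver)  # stored under "Solver"
conv$add_agent(critic)  # stored under "Critic"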
add_message(): Add a message to the global conversation log. The message is also appended to shared memory, and summarization may then be triggered if configured.
LLMConversation$add_message(speaker, text)
speaker: Character. Who is speaking?
text: Character. What they said.
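For example, to seed the log with a moderator instruction before any agent speaks:

conv$add_message(
  speaker = "Moderator",
  text    = "Please keep each answer under 100 words."
)
# The message is also appended to shared memory; if summarizer_config is set
# and the word-count threshold is exceeded, summarization runs automatically.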
converse(): Have a specific agent produce a response. The entire global conversation plus shared memory is temporarily loaded into that agent, the new message is recorded in the conversation, and the agent's memory is then reset except for its new line.
LLMConversation$converse(
  agent_id,
  prompt_template,
  replacements = list(),
  verbose = FALSE
)
agent_id: Character. The ID of the agent that should respond.
prompt_template: Character. The prompt template for the agent.
replacements: A named list of placeholders to fill in the prompt.
verbose: Logical. If TRUE, prints extra info.
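A single-turn sketch. The {{topic}} placeholder and its syntax are an assumption about how entries in replacements are substituted into the template.

conv$converse(
  agent_id        = "Solver",
  prompt_template = "Given the discussion so far, propose an answer on {{topic}}.",  # placeholder syntax assumed
  replacements    = list(topic = conv$topic),
  verbose         = TRUE
)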
run(): Run a multi-step conversation among a sequence of agents.
LLMConversation$run(
  agent_sequence,
  prompt_template,
  replacements = list(),
  verbose = FALSE
)
agent_sequence: Character vector of agent IDs in the order they speak.
prompt_template: A single string, or a named list of strings keyed by agent ID.
replacements: A single list, or a list of lists with per-agent placeholders.
verbose: Logical. If TRUE, prints extra info.
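A three-turn sketch: the Solver answers, the Critic reviews, and the Solver revises. Per-agent templates are passed as a named list keyed by agent ID; the {{topic}} placeholder syntax is again an assumption.

conv$run(
  agent_sequence  = c("Solver", "Critic", "Solver"),
  prompt_template = list(
    Solver = "Address the topic {{topic}}, revising your answer if a critique has been raised.",
    Critic = "Critique the most recent answer for accuracy and clarity."
  ),
  replacements    = list(topic = conv$topic),
  verbose         = FALSE
)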
print_history(): Print the conversation so far to the console.
LLMConversation$print_history()
reset_conversation(): Clear the global conversation and reset all agents' memories.
LLMConversation$reset_conversation()
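For example:

conv$print_history()       # print every message recorded so far
conv$reset_conversation()  # clear the log and wipe all agents' memories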
|>(): Pipe-like operator to chain conversation steps, e.g., conv |> "Solver"(...).
LLMConversation$|>(agent_id)
agent_id: Character. The ID of the agent to call next.
Returns: A function that expects (prompt_template, replacements, verbose).
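Because the method is named |>, it must be backtick-quoted when called directly on the object. A direct-call sketch of the documented behavior (the chained form shown above is the intended idiom):

step <- conv$`|>`("Solver")  # returns a function for the Solver's next turn
step(
  prompt_template = "Summarize the main points made so far.",
  replacements    = list(),
  verbose         = FALSE
)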
maybe_summarize_conversation(): Summarize the conversation if summarizer_config is non-NULL and the word count of conversation_history exceeds summarizer_config$threshold.
LLMConversation$maybe_summarize_conversation()
summarize_conversation(): Summarize the conversation so far into one condensed message. The new conversation history becomes a single message with speaker = "summary".
LLMConversation$summarize_conversation()
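Summarization usually happens automatically via add_message(), but both methods can also be called explicitly:

conv$maybe_summarize_conversation()  # acts only if summarizer_config is set and the threshold is exceeded
conv$summarize_conversation()        # condense the history unconditionally
conv$print_history()                 # now a single message with speaker = "summary"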
clone(): The objects of this class are cloneable with this method.
LLMConversation$clone(deep = FALSE)
deep: Whether to make a deep clone.