tidyprompt

‘tidyprompt’ is an R package to easily construct prompts and associated logic for interacting with large language models (‘LLMs’).

Think of ‘tidyprompt’ as the ‘ggplot2’ package for creating prompts and handling LLM interactions. ‘tidyprompt’ introduces the concept of prompt wraps: building blocks that you can use to quickly turn a simple prompt into an advanced one. Prompt wraps do not just modify the prompt text; they also add extraction and validation functions that are applied to the response of an LLM. Moreover, these functions can send feedback to the LLM.
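
For illustration, a minimal custom prompt wrap could look like the sketch below. This is a rough sketch rather than a canonical example: it assumes prompt_wrap() accepts modify_fn and extraction_fn arguments (alongside the validation_fn shown in the Examples section) and that a local Ollama server is running.

# Rough sketch of a custom prompt wrap (argument names assumed; not run)
"What is 5+5?" |>
  prompt_wrap(
    modify_fn = function(prompt_text) {
      # Append an instruction to the prompt text
      paste0(prompt_text, "\n\nRespond with only a number.")
    },
    extraction_fn = function(response) {
      # Parse the LLM response; send feedback and retry if it is not a number
      number <- suppressWarnings(as.numeric(response))
      if (is.na(number))
        return(llm_feedback("Respond with only a number."))
      number
    }
  ) |>
  send_prompt(llm_provider_ollama())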

With ‘tidyprompt’ and prompt wraps, you can add various features to your prompts and define how they are evaluated when sent to an LLM. For example:

  • structured output: Obtain structured output from an LLM, adhering to a specific type and/or format. Use pre-built prompt wraps or your own R code to validate.

  • feedback & retries: Automatically provide feedback to an LLM when its output is not as expected, allowing the LLM to retry its answer.

  • reasoning modes: Make an LLM answer a prompt in a specific mode, such as chain-of-thought or ReAct (Reasoning and Acting); see the sketch after this list.

  • function calling: Give an LLM the ability to autonomously call R functions (‘tools’). With this, the LLM can retrieve information or take other actions. ‘tidyprompt’ also supports R code generation and evaluation, allowing LLMs to run R code. Tools from Model Context Protocol (MCP) servers are also supported.
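
As a quick illustration of a reasoning mode, the sketch below (assuming a local Ollama server is running) answers a prompt in chain-of-thought mode before extracting an integer.

# Minimal sketch: chain-of-thought reasoning combined with an integer answer (not run)
"A train leaves at 9:10 and arrives at 10:45; how long is the trip in minutes?" |>
  answer_by_chain_of_thought() |>
  answer_as_integer() |>
  send_prompt(llm_provider_ollama())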

With these features, ‘tidyprompt’ extends the functionality of LLMs beyond what LLM APIs natively offer, letting you elegantly design complex, robust interactions with LLMs.

‘tidyprompt’ is compatible with the ‘ellmer’ R package, which has gained popularity for interfacing with LLM APIs. ‘tidyprompt’ can connect to LLM providers via ‘ellmer’ chat objects, and supports ‘ellmer’ definitions for structured output and tools (see below for more information).

Installation

Install the development version from GitHub:

# install.packages("remotes")
remotes::install_github("KennispuntTwente/tidyprompt")

Or install the latest release from CRAN:

install.packages("tidyprompt")

Getting started

See the ‘Getting started’ vignette for a detailed introduction to using ‘tidyprompt’.

Examples

Here are some quick examples of what you can do with ‘tidyprompt’:

"What is 5+5?" |>
  answer_as_integer() |>
  send_prompt(llm_provider_ollama())
#> [1] 10

"Are you a large language model?" |>
  answer_as_boolean() |>
  send_prompt(llm_provider_ollama())
#> [1] TRUE

"What animal is the biggest?" |>
  answer_as_regex_match("^(cat|dog|elephant)$") |>
  send_prompt(llm_provider_ollama())
#> [1] "elephant"
# Make LLM use a function from an R package to search Wikipedia for the answer
"What is something fun that happened in November 2024?" |>
  answer_as_text(max_words = 25) |>
  answer_using_tools(getwiki::search_wiki) |>
  send_prompt(llm_provider_ollama())
#> [1] "The 2024 ARIA Music Awards ceremony, a vibrant celebration of Australian music,
#> took place on November 20, 2024."

# From prompt to linear model object in R
model <- paste0(
  "Using my data, create a statistical model",
  " investigating the relationship between two variables."
) |>
  answer_using_r(
    objects_to_use = list(data = cars),
    evaluate_code = TRUE,
    return_mode = "object"
  ) |>
  prompt_wrap(
    validation_fn = function(x) {
      if (!inherits(x, "lm"))
        return(llm_feedback("The output should be a linear model object."))
      return(TRUE)
    }
  ) |>
  send_prompt(llm_provider_ollama())
summary(model)
#> Call:
#> lm(formula = speed ~ dist, data = data)
#> 
#> Residuals:
#>     Min      1Q  Median      3Q     Max 
#> -7.5293 -2.1550  0.3615  2.4377  6.4179 
#> 
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)  8.28391    0.87438   9.474 1.44e-12 ***
#> dist         0.16557    0.01749   9.464 1.49e-12 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 3.156 on 48 degrees of freedom
#> Multiple R-squared:  0.6511, Adjusted R-squared:  0.6438 
#> F-statistic: 89.57 on 1 and 48 DF,  p-value: 1.49e-12

# Escape validation on questions that cannot be answered
"How many years old is my neighbour's dog?" |>
  answer_as_integer() |>
  quit_if() |>
  send_prompt(llm_provider_ollama())
#> NULL

# LLM in the loop; 
#   LLM verifies answer of LLM and can provide feedback
"What is the capital of France?" |>
  llm_verify() |>
  send_prompt(llm_provider_ollama())
#> ...
  
# Human in the loop; 
#   user verifies answer of LLM and can provide feedback
"What is the capital of France?" |>
  user_verify() |>
  send_prompt(llm_provider_ollama())
#> ...

More information and contributing

‘tidyprompt’ is developed by Luka Koning (l.koning@kennispunttwente.nl) and Tjark van de Merwe (t.vandemerwe@kennispunttwente.nl).

If you encounter issues, have questions, or have suggestions, please open an issue in the GitHub repository. You are also welcome to contribute to the package by opening a pull request.

Why ‘tidyprompt’?

We designed ‘tidyprompt’ because we found ourselves repeatedly writing code to both construct prompts and handle the associated LLM output; these tasks were intertwined. Often, we also wanted to add features to our prompts, or take them away, which required us to rewrite a lot of code. We therefore wanted building blocks with which we could easily construct prompts and simultaneously add code to handle the output of LLMs. This led us to a design inspired by piping syntax, as popularized by the ‘tidyverse’ and familiar to many R users.

‘tidyprompt’ should be seen as a tool which can be used to enhance the functionality of LLMs beyond what APIs natively offer. It is designed to be flexible and provider-agnostic, so that its features can be used with a wide range of LLM providers and models. It is primarily focused on ‘text-based’ handling of LLMs, where textual output is parsed to achieve structured output and other functionalities.

Several LLM providers and models also offer forms of ‘native’ handling, where the provider itself constrains the LLM to produce output in a certain manner. Where appropriate, ‘tidyprompt’ supports such native configuration of specific APIs. Currently, answer_as_json() and answer_using_tools() offer native support for adhering to JSON schemas and calling functions.
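
As an illustration, a JSON schema can be supplied to answer_as_json(); the sketch below assumes that its schema argument accepts a schema expressed as a plain R list and that a local Ollama server is running.

# Sketch of structured output against a JSON schema (schema argument assumed; not run)
recommendation_schema <- list(
  type = "object",
  properties = list(
    package = list(type = "string"),
    reason = list(type = "string")
  ),
  required = list("package", "reason")
)
"Recommend one R package for plotting and briefly explain why" |>
  answer_as_json(schema = recommendation_schema) |>
  send_prompt(llm_provider_ollama())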

The functions introduced by ‘tidyprompt’ can go beyond what is enforced by native handling, for example by adding additional validation or feedback as defined by your R code (i.e., logic and actions which cannot be captured in just a structured output schema).

How does ‘tidyprompt’ relate to ‘ellmer’ and ‘tidyllm’?

‘tidyprompt’ is less focused on interfacing with the APIs of various LLM providers, as the R packages ‘ellmer’ and ‘tidyllm’ are. Instead, ‘tidyprompt’ primarily offers a framework for constructing prompts and the associated logic for interactions with LLMs.

We aim to design ‘tidyprompt’ in such a way that it can be compatible with ‘ellmer’, ‘tidyllm’, and any other packages offering an interface to LLM APIs.

‘ellmer’ specifically has surfaced as the most popular R package for interfacing with LLM APIs. Therefore, we have introduced an LLM provider which can be built from an ‘ellmer’ chat object (see: tidyprompt::llm_provider_ellmer()). This lets users work with any LLM provider that can be configured with ‘ellmer’, including the respective configuration and features from the ‘ellmer’ package.
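
A rough sketch of this route is shown below; it assumes the ‘ellmer’ package is installed with an OpenAI API key configured, that llm_provider_ellmer() takes the chat object as its first argument, and the model name is only illustrative.

# Build a tidyprompt LLM provider from an 'ellmer' chat object (sketch; not run)
chat <- ellmer::chat_openai(model = "gpt-4o-mini")
provider <- llm_provider_ellmer(chat)

"What is 5+5?" |>
  answer_as_integer() |>
  send_prompt(provider)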

Furthermore, answer_as_json() and answer_using_tools() support ‘ellmer’ definitions for structured output and tools. When using an ‘ellmer’ LLM provider, these functions will also call the native ‘ellmer’ functions to obtain structured output and register tools. (And because mcptools::mcp_tools() returns ‘ellmer’ tool definitions, answer_using_tools() also supports tools from MCP servers.)
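
A sketch of the MCP route is given below; it assumes that MCP servers have been configured for the ‘mcptools’ package and that mcptools::mcp_tools() can be called without arguments to read that configuration.

# Pass MCP server tools ('ellmer' tool definitions) to a prompt (sketch; not run)
provider <- llm_provider_ellmer(ellmer::chat_openai(model = "gpt-4o-mini"))
mcp_server_tools <- mcptools::mcp_tools()

"Use the available tools to answer: which files are in the current project?" |>
  answer_using_tools(mcp_server_tools) |>
  send_prompt(provider)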

This means that ‘tidyprompt’ extends what is possible with ‘ellmer’ (and vice versa).

Monthly Downloads

663

Version

0.3.0

License

GPL (>= 3) | file LICENSE

Maintainer

Luka Koning

Last Published

November 30th, 2025

Functions in tidyprompt (0.3.0)

  • add_msg_to_chat_history: Add a message to a chat history
  • add_image: Add an image to a tidyprompt (multimodal)
  • answer_as_category: Make LLM answer as a category
  • answer_as_list: Make LLM answer as a list of items
  • answer_as_multi_category: Build prompt for categorizing a text into multiple categories
  • answer_as_integer: Make LLM answer as an integer (between min and max)
  • answer_as_boolean: Make LLM answer as a boolean (TRUE or FALSE)
  • chat_history: Create or validate chat_history object
  • answer_as_named_list: Make LLM answer as a named list
  • answer_using_sql: Enable LLM to draft and execute SQL queries on a database
  • answer_using_r: Enable LLM to draft and execute R code
  • answer_as_text: Make LLM answer as a constrained text response
  • answer_by_react: Set ReAct mode for a prompt
  • answer_as_regex_match: Make LLM answer match a specific regex
  • answer_by_chain_of_thought: Set chain of thought mode for a prompt
  • chat_history.character: Method for chat_history() when the input is a single string
  • answer_using_tools: Enable LLM to call R functions (and/or MCP server tools)
  • extract_from_return_list: Function to extract a specific element from a list
  • get_chat_history: Get the chat history of a tidyprompt object
  • is_tidyprompt: Check if object is a tidyprompt object
  • df_to_string: Convert a dataframe to a string representation
  • llm_break_soft: Create an llm_break_soft object
  • get_prompt_wraps: Get prompt wraps from a tidyprompt object
  • chat_history.default: Default method for chat_history()
  • construct_prompt_text: Construct prompt text from a tidyprompt object
  • llm_break: Create an llm_break object
  • chat_history.data.frame: Method for chat_history() when the input is a data.frame
  • llm_provider_ellmer: Create a new LLM provider from an ellmer::chat() object
  • llm_feedback: Create an llm_feedback object
  • llm_provider_groq: Create a new Groq LLM provider
  • llm_provider_mistral: Create a new Mistral LLM provider
  • llm_provider-class: LlmProvider R6 Class
  • llm_provider_google_gemini: Create a new Google Gemini LLM provider
  • llm_provider_openrouter: Create a new OpenRouter LLM provider
  • llm_provider_ollama: Create a new Ollama LLM provider
  • llm_provider_openai: Create a new OpenAI LLM provider
  • llm_provider_xai: Create a new XAI (Grok) LLM provider
  • quit_if: Make evaluation of a prompt stop if LLM gives a specific response
  • prompt_wrap: Wrap a prompt with functions for modification and handling the LLM response
  • skim_with_labels_and_levels: Skim a dataframe and include labels and levels
  • r_json_schema_to_example: Generate an example object from a JSON schema
  • persistent_chat-class: PersistentChat R6 class
  • llm_verify: Have LLM check the result of a prompt (LLM-in-the-loop)
  • set_system_prompt: Set system prompt of a tidyprompt object
  • set_chat_history: Set the chat history of a tidyprompt object
  • send_prompt: Send a prompt to a LLM provider
  • provider_prompt_wrap: Create a provider-level prompt wrap
  • tidyprompt-class: Tidyprompt R6 Class
  • vector_list_to_string: Convert a named or unnamed list/vector to a string representation
  • tidyprompt-package: tidyprompt: Prompt Large Language Models and Enhance Their Functionality
  • tools_get_docs: Extract documentation from a function
  • tidyprompt: Create a tidyprompt object
  • user_verify: Have user check the result of a prompt (human-in-the-loop)
  • tools_add_docs: Add tidyprompt function documentation to a function
  • add_text: Add text to a tidyprompt
  • answer_as_key_value: Make LLM answer as a list of key-value pairs
  • answer_as_json: Make LLM answer as JSON (with optional schema; structured output)