
make_query

Description

make_query() generates structured input for a language model, including a system prompt, user messages, and optional examples (user messages paired with assistant answers).
Usage

make_query(
  text,
  prompt,
  template = "{prefix}{text}\n{prompt}\n{suffix}",
  system = NULL,
  prefix = NULL,
  suffix = NULL,
  examples = NULL
)
Value

A list of tibbles, one per input text, each containing structured rows for system messages, user messages, and assistant responses.
Arguments

text: A character vector of texts to be annotated.

prompt: A string defining the main task or question to be passed to the language model.

template: A string template for formatting user queries, containing placeholders like {text}, {prefix}, and {suffix}.

system: An optional string to specify a system prompt.

prefix: A prefix string to prepend to each user query.

suffix: A suffix string to append to each user query.

examples: A tibble with columns text and answer, representing example user messages and their corresponding assistant responses.
Details

The function supports the inclusion of examples, which are dynamically added to the structured input. Each example follows the same format as the primary user query.
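Because the template uses {placeholder} syntax, each user query can be thought of as the result of interpolating the prefix, text, prompt, and suffix into the template. The sketch below is an illustration only, assuming glue-style interpolation; make_query()'s actual formatting is internal to the package and may differ.

```r
library(glue)

# Hypothetical illustration: assemble one user message the way the
# default template suggests. make_query() does this for every input text.
template <- "{prefix}{text}\n{prompt}\n{suffix}"
prefix <- "Review: "
text   <- "A stunning visual spectacle."
prompt <- "Classify sentiment as positive, neutral, or negative."
suffix <- " Please classify."

# glue() substitutes {prefix}, {text}, {prompt}, and {suffix} from the
# calling environment.
user_message <- glue(template)
cat(user_message)
# Review: A stunning visual spectacle.
# Classify sentiment as positive, neutral, or negative.
#  Please classify.
```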
Examples

template <- "{prefix}{text}\n\n{prompt}{suffix}"

examples <- tibble::tribble(
  ~text, ~answer,
  "This movie was amazing, with great acting and story.", "positive",
  "The film was okay, but not particularly memorable.", "neutral",
  "I found this movie boring and poorly made.", "negative"
)

queries <- make_query(
  text = c("A stunning visual spectacle.", "Predictable but well-acted."),
  prompt = "Classify sentiment as positive, neutral, or negative.",
  template = template,
  system = "Provide a sentiment classification.",
  prefix = "Review: ",
  suffix = " Please classify.",
  examples = examples
)

print(queries)

if (ping_ollama()) { # only run this example when Ollama is running
  query(queries, screen = TRUE, output = "text")
}