openaiRtools (version 0.2.2)

create_moderation: Check Content for Policy Violations (Convenience Function)

Description

Shortcut that creates an OpenAI client from the OPENAI_API_KEY environment variable and calls client$moderations$create().

Usage

create_moderation(input, model = "omni-moderation-latest")

Value

A list with $results — a list of result objects, one per input. Each result has:

  • $flagged — Logical, TRUE if content violates policy

  • $categories — Named list of boolean flags per category

  • $category_scores — Named list of scores (0–1) per category

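The structure above can be inspected directly. A minimal sketch using a hand-built result in the documented shape (no API call is made; the category names "harassment" and "violence" are shown for illustration and follow the OpenAI moderation taxonomy):

```r
# Mock result in the documented shape (illustration only, no API call)
result <- list(results = list(list(
  flagged = TRUE,
  categories = list(harassment = TRUE, violence = FALSE),
  category_scores = list(harassment = 0.91, violence = 0.02)
)))

r <- result$results[[1]]
# Collect the names of categories whose boolean flag is TRUE
hits <- names(Filter(isTRUE, r$categories))
cat("Flagged:", r$flagged, "-", paste(hits, collapse = ", "), "\n")
```

The same pattern applies to a real response: iterate over `$results` and filter `$categories` per element.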
Arguments

input

Required. A character string or list of strings to moderate. Example: "Kill all humans" or list("Hello", "I hate you").

model

Character. Moderation model to use: "omni-moderation-latest" (default, supports images) or "text-moderation-latest".

Details

This API is free and does not consume tokens. Use it to screen user-generated content before passing it to other APIs.
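The screening pattern described above might look like the following sketch. The `moderate` and `deliver` arguments are hypothetical injection points added here so the gate can be exercised without a live key; in real use they would default to `create_moderation` and your downstream call:

```r
# Gate user content through moderation before handing it downstream.
# `moderate` returns the documented list-with-$results shape;
# `deliver` stands in for whatever API call comes next (hypothetical).
screen_then_send <- function(text, moderate, deliver = identity) {
  mod <- moderate(text)
  if (isTRUE(mod$results[[1]]$flagged)) {
    stop("Input rejected by content moderation")
  }
  deliver(text)
}
```

Because moderation is free and token-less, running this gate on every piece of user-generated input adds no usage cost.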

Examples

if (FALSE) {
Sys.setenv(OPENAI_API_KEY = "sk-xxxxxx")

# Quick single-text check
result <- create_moderation("I love everyone!")
cat("Flagged:", result$results[[1]]$flagged, "\n") # Flagged: FALSE

# Screen multiple texts from user input
texts <- list("normal message", "harmful content example")
result <- create_moderation(texts)
for (i in seq_along(result$results)) {
  cat("Text", i, "flagged:", result$results[[i]]$flagged, "\n")
}
}
