Client for the OpenAI Moderations API. Classifies text (and optionally
images) for potentially harmful content according to OpenAI's usage policies.
Access it via client$moderations.
Supported moderation categories include:
hate, hate/threatening, harassment, harassment/threatening,
self-harm, self-harm/intent, self-harm/instructions,
sexual, sexual/minors, violence, violence/graphic.
Methods

new()
  ModerationsClient$new(parent)

create()
  ModerationsClient$create(input, model = "omni-moderation-latest")

clone()
  The objects of this class are cloneable with this method.
  ModerationsClient$clone(deep = FALSE)

  Arguments:
    deep: Whether to make a deep clone.
The API returns per-category boolean flags and confidence scores. The moderation endpoint is free to use and does not count against your token quota.
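A minimal usage sketch in R. The constructor name `openai_client()` is an assumption for illustration (the package's actual entry point may differ); the `create()` call and default model follow the method listing above, and the response fields mirror the Moderations API's `flagged`, `categories`, and `category_scores` structure.

```r
# Hypothetical client constructor; the real name depends on the package.
client <- openai_client(api_key = Sys.getenv("OPENAI_API_KEY"))

# Classify one string; "omni-moderation-latest" is the default model.
result <- client$moderations$create(
  input = "some text to classify",
  model = "omni-moderation-latest"
)

# The response carries per-category flags and confidence scores.
res <- result$results[[1]]
res$flagged                    # overall TRUE/FALSE flag
res$categories$violence        # per-category boolean flag
res$category_scores$violence   # confidence score in [0, 1]
```

Because the endpoint is free, it is common to run every user-supplied input through `create()` before passing it to a billed completion call.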