The Batch API accepts a JSONL file of requests, processes them asynchronously within a 24-hour completion window, and returns the results in an output file. It is suited to bulk embeddings, offline evaluation, large-scale text processing, or any task that does not require immediate results.
1. Create a JSONL file where each line is one API request (see format below).
2. Upload the file: client$files$create(file, purpose = "batch")
3. Create a batch: client$batch$create(input_file_id, endpoint)
4. Poll status: client$batch$retrieve(batch_id)
5. Download results: client$files$content(batch$output_file_id)
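A minimal sketch of the upload and create steps, assuming a `client` object constructed elsewhere and that responses carry an `id` field as in the OpenAI API file and batch objects:

```r
# Sketch only: assumes `client` exists and that the file-upload response
# exposes its identifier as `file$id` (as in the OpenAI API file object).
file <- client$files$create("requests.jsonl", purpose = "batch")

batch <- client$batch$create(
  input_file_id = file$id,
  endpoint = "/v1/chat/completions",
  completion_window = "24h"
)
```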
Each line in the input file must be a single JSON object:

  {"custom_id": "req-1",
   "method": "POST",
   "url": "/v1/chat/completions",
   "body": {"model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Hello!"}]}}
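One way to build such a file from R is with the jsonlite package (not part of this client; shown as an illustration). `auto_unbox = TRUE` keeps scalar fields like `custom_id` from being serialised as one-element arrays:

```r
library(jsonlite)

prompts <- c("Hello!", "Summarise the Batch API in one sentence.")

# One JSONL line per request; field names follow the batch input format above.
lines <- vapply(seq_along(prompts), function(i) {
  as.character(toJSON(list(
    custom_id = paste0("req-", i),
    method = "POST",
    url = "/v1/chat/completions",
    body = list(
      model = "gpt-4o-mini",
      messages = list(list(role = "user", content = prompts[i]))
    )
  ), auto_unbox = TRUE))
}, character(1))

writeLines(lines, "requests.jsonl")
```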
Methods:

  new():      BatchClient$new(parent)
  create():   BatchClient$create(input_file_id, endpoint,
                completion_window = "24h", metadata = NULL)
  list():     BatchClient$list(after = NULL, limit = NULL)
  retrieve(): BatchClient$retrieve(batch_id)
  cancel():   BatchClient$cancel(batch_id)
  clone():    BatchClient$clone(deep = FALSE)
              The objects of this class are cloneable with this method.
              Argument deep: whether to make a deep clone.
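The poll-and-download steps can be sketched as a simple loop, assuming a `batch_id` from an earlier `create()` call and that the retrieved batch object exposes `status` and `output_file_id` fields as in the OpenAI Batch object:

```r
# Sketch only: `batch_id` comes from a prior create(); `$status` and
# `$output_file_id` mirror the OpenAI Batch object fields.
repeat {
  batch <- client$batch$retrieve(batch_id)
  if (batch$status %in% c("completed", "failed", "expired", "cancelled")) break
  Sys.sleep(60)  # batches may take up to 24 hours; poll sparingly
}

if (identical(batch$status, "completed")) {
  results <- client$files$content(batch$output_file_id)
}
```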
Client for the OpenAI Batch API. Processes large volumes of API requests
asynchronously at 50% lower cost than the equivalent synchronous calls.
Access via client$batch.