aisdk (version 1.1.0)

create_volcengine: Create Volcengine/Ark Provider

Description

Factory function to create a Volcengine provider using the Ark API.

Usage

create_volcengine(api_key = NULL, base_url = NULL, headers = NULL)

Value

A VolcengineProvider object.

Arguments

api_key

Volcengine API key. Defaults to the ARK_API_KEY environment variable.

base_url

Base URL for API calls. Defaults to https://ark.cn-beijing.volces.com/api/v3.

headers

Optional additional headers.
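The arguments above can be combined as follows. This is a minimal sketch: the defaults shown mirror the documented behavior, and the extra header name is purely illustrative.

```r
# Explicit configuration; by default the key is read from ARK_API_KEY
volcengine <- create_volcengine(
  api_key  = Sys.getenv("ARK_API_KEY"),
  base_url = "https://ark.cn-beijing.volces.com/api/v3",  # the default endpoint
  headers  = list(`X-Client-Tag` = "my-app")              # illustrative extra header
)
```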

Supported Models

  • doubao-lite-128k-240428

  • doubao-pro-128k-240515

  • doubao-lite-4k-240328

  • doubao-lite-32k-240428

  • doubao-pro-4k-240515

  • doubao-lite-4k-character-240515

  • doubao-embedding-text-240515

  • mistral-7b-instruct-v0.2 (Vision)

  • doubao-pro-4k-character-240515

  • doubao-pro-4k-functioncall-240515

  • doubao-lite-4k-pretrain-character-240516

  • doubao-pro-32k-character-240528

  • doubao-pro-4k-browsing-240524

  • doubao-pro-32k-functioncall-240515

  • doubao-pro-4k-functioncall-240615

  • ... and 123 more models. Use list_models("volcengine") to see all.
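The full catalogue can be browsed with the list_models() helper mentioned above; a minimal sketch, assuming the package is installed:

```r
library(aisdk)

# Returns the registered Volcengine model IDs
models <- list_models("volcengine")
head(models)
```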

API Formats

Volcengine supports both the Chat Completions API and the Responses API:

  • language_model(): Uses Chat Completions API (standard)

  • responses_model(): Uses Responses API (for reasoning models)

  • smart_model(): Auto-selects based on model ID

Token Limit Parameters for Volcengine Responses API

Volcengine's Responses API has two mutually exclusive token limit parameters:

  • max_output_tokens: Total limit including reasoning + answer (default mapping)

  • max_tokens (API level): Answer-only limit, excluding reasoning

The SDK's unified max_tokens parameter maps to max_output_tokens by default, which is the safe choice to prevent runaway reasoning costs.

For advanced users who want answer-only limits:

  • Use max_answer_tokens parameter to explicitly set answer-only limit

  • Use max_output_tokens parameter to explicitly set total limit

Examples

# \donttest{
if (interactive()) {
    volcengine <- create_volcengine()

    # Chat API (standard models)
    model <- volcengine$language_model("doubao-1-5-pro-256k-250115")
    result <- generate_text(model, "Hello")

    # Responses API (reasoning models like DeepSeek)
    model <- volcengine$responses_model("deepseek-r1-250120")

    # Default: max_tokens limits total output (reasoning + answer)
    msgs <- list(list(role = "user", content = "Hello"))  # message format assumed OpenAI-style
    result <- model$generate(messages = msgs, max_tokens = 2000)

    # Advanced: limit only the answer part (reasoning can be longer)
    result <- model$generate(messages = msgs, max_answer_tokens = 500)

    # Smart model selection (auto-detects best API)
    model <- volcengine$smart_model("deepseek-r1-250120")
}
# }
