A thin, dependency-light R wrapper for the Google Gemini API (`models.generateContent` / `models.streamGenerateContent`).
Usage

gemini4R(
  mode,
  contents,
  model = "gemini-2.0-flash",
  store_history = FALSE,
  api_key = Sys.getenv("GoogleGemini_API_KEY"),
  max_tokens = 2048,
  ...
)
Value

For non-stream modes (`"text"`, `"chat"`), a parsed list. For stream modes (`"stream_text"`, `"stream_chat"`), a list with `full_text` and `chunks`.
Arguments

mode: One of `"text"`, `"stream_text"`, `"chat"`, or `"stream_chat"`.

contents: Character vector (single-turn) or a list of message objects (chat modes). See Examples and the sketch after this argument list.

model: Gemini model ID. Default `"gemini-2.0-flash"`.

store_history: Logical. If `TRUE`, the chat history is persisted to the `chat_history` environment variable as JSON.

api_key: Your Google Gemini API key. Default `Sys.getenv("GoogleGemini_API_KEY")`.

max_tokens: Maximum number of output tokens (default 2048). Use `NULL` for the server default.

...: Additional options passed to `httr::POST()` (timeouts, etc.).
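The exact shape of a chat-mode message object is not spelled out above. The sketch below is an assumption modeled on the Gemini REST `contents` layout (a list of turns, each with a `role` and a `parts` list of text chunks); the field names `role`, `parts`, and `text` are therefore hypothetical here, not confirmed by this page.

# Hypothetical chat history for the "chat" / "stream_chat" modes,
# assuming the Gemini REST contents schema (role + parts of text).
chat_contents <- list(
  list(role = "user",
       parts = list(list(text = "What is the capital of France?"))),
  list(role = "model",
       parts = list(list(text = "The capital of France is Paris."))),
  list(role = "user",
       parts = list(list(text = "And roughly how many people live there?")))
)

If this layout matches what the package expects, the history would be passed as `gemini4R("chat", contents = chat_contents)`.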
Author

Satoshi Kume (revised 2025-07-01)
Examples

if (FALSE) {
  gemini4R("text", contents = "Explain how AI works.", max_tokens = 256)
}
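For the streaming modes, the return value described above is a list with `full_text` and `chunks`; a usage sketch (assuming a valid key in the `GoogleGemini_API_KEY` environment variable) might look like:

if (FALSE) {
  # Streaming single-turn request; wrapped in if (FALSE) like the example
  # above because it makes a live API call.
  res <- gemini4R("stream_text",
                  contents = "Write a haiku about the sea.",
                  max_tokens = 128)
  cat(res$full_text)   # full concatenated response text
  length(res$chunks)   # number of streamed chunks received
}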