
gemini.R (version 0.8.0)

gemini_chat: Multi-turn conversations (chat)

Description

Generate text from text with Gemini while keeping track of a multi-turn conversation history.

Usage

gemini_chat(
  prompt,
  history = list(),
  model = "1.5-flash",
  temperature = 0.5,
  maxOutputTokens = 1024
)

Value

A list containing the generated text (outputs) and the updated conversation history (history).
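
A minimal sketch of inspecting the returned object (field names taken from the Examples below; assumes a valid API key has already been set with setAPI()):

chats <- gemini_chat("Hello!")
str(chats)        # a list holding the generated text and the conversation history
chats$outputs     # generated text for the latest turn
chats$history     # pass this to the next gemini_chat() call to continue the chat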

Arguments

prompt

The prompt to generate text from

history

A history object used to keep track of the conversation. Defaults to an empty list.

model

The model to use. Options are '1.5-flash', '1.5-pro', '1.0-pro', and '2.0-flash-exp'. Default is '1.5-flash'. See https://ai.google.dev/gemini-api/docs/models/gemini

temperature

The temperature to use. Default is 0.5. The value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters

maxOutputTokens

The maximum number of output tokens to generate. Default is 1024. Roughly 100 tokens correspond to 60-80 words.
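
A short usage sketch with the generation parameters set explicitly; the prompt and parameter values are illustrative only, and a valid API key is assumed to have been set with setAPI():

chat <- gemini_chat(
  "Explain what a tibble is in one sentence.",
  model = "1.5-pro",
  temperature = 0.2,
  maxOutputTokens = 256
)
print(chat$outputs)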

See Also

https://ai.google.dev/docs/gemini_api_overview#chat

Examples

if (FALSE) {
library(gemini.R)
setAPI("YOUR_API_KEY")

# Start a new conversation; the returned object carries both the reply and the history
chats <- gemini_chat("Pretend you're a snowman and stay in character for each")
print(chats$outputs)

# Continue the same conversation by passing the previous history back in
chats <- gemini_chat("What's your favorite season of the year?", chats$history)
print(chats$outputs)

chats <- gemini_chat("How do you think about summer?", chats$history)
print(chats$outputs)
}
