edgemodelr (version 0.2.0)

edge_download_model: Download a GGUF model from Hugging Face

Description

Downloads a GGUF model file from a Hugging Face repository into a local cache directory, optionally verifying its SHA-256 checksum.

Usage

edge_download_model(
  model_id,
  filename,
  cache_dir = NULL,
  force_download = FALSE,
  verify_checksum = TRUE,
  expected_sha256 = NULL,
  trust_first_use = FALSE,
  verbose = TRUE
)

Value

Path to the downloaded model file

Arguments

model_id

Hugging Face model identifier (e.g., "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF")

filename

Name of the specific GGUF file within the repository to download

cache_dir

Directory to store downloaded models (default: "~/.cache/edgemodelr")

force_download

Force re-download even if the file already exists in the cache (default: FALSE)

verify_checksum

Verify SHA-256 checksum if available (default: TRUE)

expected_sha256

Optional expected SHA-256 hash for the model file

trust_first_use

If no known hash exists, record the file's hash locally on first download so later downloads can be verified against it (default: FALSE)

verbose

Whether to print download progress messages (default: TRUE)

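The checksum arguments above can be combined to pin a download to a known digest, or to adopt a trust-on-first-use policy when no published digest is available. A minimal sketch (the hash shown is a placeholder, not the real digest for this file):

```r
# Pin the download to an expected SHA-256 digest (placeholder value shown;
# substitute the digest published for the file you are downloading)
model_path <- edge_download_model(
  model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
  filename = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
  expected_sha256 = "0000...placeholder...ffff",
  verify_checksum = TRUE
)

# Alternatively, when no digest is published, record the hash on first
# download so subsequent downloads can be checked against it
model_path <- edge_download_model(
  model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
  filename = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
  trust_first_use = TRUE
)
```

If verification fails, the downloaded file should not be loaded; a mismatch usually indicates a corrupted or tampered download.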
Examples

if (FALSE) {
# Download TinyLlama model (large file, not run in checks)
model_path <- edge_download_model(
  model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
  filename = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
)

# Use the downloaded model
if (file.exists(model_path)) {
  ctx <- edge_load_model(model_path)
  response <- edge_completion(ctx, "Hello, how are you?")
  cat(response)
  edge_free_model(ctx)
}
}
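Because downloads are cached, a repeated call with the same arguments returns the cached path without re-downloading. A sketch of overriding the cache location and forcing a refresh (the directory name is illustrative):

```r
# Store models in a project-local directory and force a fresh download
model_path <- edge_download_model(
  model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
  filename = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
  cache_dir = "models",    # instead of the default "~/.cache/edgemodelr"
  force_download = TRUE    # re-download even if the file is cached
)
```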