edgemodelr (version 0.1.6)

edge_small_model_config: Get optimized configuration for small language models

Description

Returns recommended parameters for loading and using small models (1B-3B parameters) to maximize inference speed on resource-constrained devices.

Usage

edge_small_model_config(
  model_size_mb = NULL,
  available_ram_gb = NULL,
  target = "laptop"
)

Value

A list of optimized parameters suitable for passing to edge_load_model() and edge_completion().

Arguments

model_size_mb

Model file size in MB (if known). If NULL, uses conservative defaults.

available_ram_gb

Available system RAM in GB. If NULL, uses conservative defaults.

target

Device target: "mobile", "laptop", "desktop", or "server" (default: "laptop")
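
Details

The target argument scales the defaults for the device class, trading context size and GPU offload against memory headroom. As a minimal sketch (only n_ctx and n_gpu_layers are confirmed by the example below; other fields in the returned list may vary by package version), you can compare the recommendations across targets:

```r
library(edgemodelr)

# Compare recommended settings across device targets for the same model.
# Assumes a 700 MB model and 8 GB of available RAM.
for (tgt in c("mobile", "laptop", "desktop", "server")) {
  cfg <- edge_small_model_config(
    model_size_mb = 700,
    available_ram_gb = 8,
    target = tgt
  )
  cat(tgt, ": n_ctx =", cfg$n_ctx,
      ", n_gpu_layers =", cfg$n_gpu_layers, "\n")
}
```

More constrained targets such as "mobile" would be expected to yield smaller context windows and fewer GPU-offloaded layers than "desktop" or "server".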

Examples

# Get optimized config for a 700MB model on a laptop
config <- edge_small_model_config(model_size_mb = 700, available_ram_gb = 8)

# Use the config to load a model (wrapped in if (FALSE) because it
# requires a local GGUF model file and is not run automatically)
if (FALSE) {
model_path <- "path/to/tinyllama.gguf"
if (file.exists(model_path)) {
  ctx <- edge_load_model(
    model_path,
    n_ctx = config$n_ctx,
    n_gpu_layers = config$n_gpu_layers
  )

  result <- edge_completion(
    ctx,
    prompt = "Hello",
    n_predict = config$recommended_n_predict,
    temperature = config$recommended_temperature
  )

  edge_free_model(ctx)
}
}
