Enables R users to run large language models locally using 'GGUF' model files and the 'llama.cpp' inference engine. Provides a complete R interface for loading models, generating text completions, and streaming responses in real time. Supports local inference without requiring cloud APIs or internet connectivity, ensuring complete data privacy and control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) <https://github.com/ggml-org/llama.cpp>.
edge_load_model: Load a GGUF model file
edge_completion: Generate text completions
edge_stream_completion: Stream text generation in real time
edge_chat_stream: Interactive chat interface
edge_quick_setup: One-line model download and setup
edge_free_model: Release model memory
edge_list_models: List available pre-configured models
edge_download_model: Download models from Hugging Face
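The functions above can also be combined by hand instead of going through edge_quick_setup(). The sketch below shows that manual path; the argument passed to edge_download_model() and the use of its return value as a file path are assumptions for illustration, not documented signatures:

```r
# Manual workflow sketch; argument names and return shapes are assumptions
models <- edge_list_models()                      # browse pre-configured models
path   <- edge_download_model("TinyLlama-1.1B")   # hypothetical: returns local GGUF path
ctx    <- edge_load_model(path)                   # load the GGUF file into memory
edge_completion(ctx, "Hello")                     # generate a completion
edge_free_model(ctx)                              # release model memory when done
```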
Basic usage workflow:
Download and load a model: setup <- edge_quick_setup("TinyLlama-1.1B")
Generate text: edge_completion(setup$context, "Hello")
Clean up: edge_free_model(setup$context)
For interactive chat:
setup <- edge_quick_setup("TinyLlama-1.1B")
edge_chat_stream(setup$context)
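Token-by-token output is exposed through edge_stream_completion(). The callback interface sketched here (a function receiving each chunk, with its return value controlling continuation) is an assumption modeled on common streaming APIs, not a confirmed signature:

```r
# Streaming sketch; the callback parameter and chunk format are assumptions
setup <- edge_quick_setup("TinyLlama-1.1B")
edge_stream_completion(setup$context, "Explain GGUF in one sentence",
  callback = function(chunk) {
    cat(chunk)   # print each piece of text as it arrives
    TRUE         # hypothetical: TRUE continues streaming, FALSE stops early
  }
)
edge_free_model(setup$context)
```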
See comprehensive examples in the package:
system.file("examples/getting_started_example.R", package = "edgemodelr")
system.file("examples/data_science_assistant_example.R", package = "edgemodelr")
system.file("examples/text_analysis_example.R", package = "edgemodelr")
system.file("examples/creative_writing_example.R", package = "edgemodelr")
system.file("examples/advanced_usage_example.R", package = "edgemodelr")
Run examples:
# Getting started guide
source(system.file("examples/getting_started_example.R", package = "edgemodelr"))

# Data science assistant
source(system.file("examples/data_science_assistant_example.R", package = "edgemodelr"))
C++17 compatible compiler
Sufficient RAM for model size (1GB+ for small models, 8GB+ for 7B models)
GGUF model files (downloaded automatically or manually)
This package processes all data locally on your machine. No data is sent to external servers, ensuring complete privacy and control over your text generation workflows.
Pawan Rama Mali prm@outlook.in
The edgemodelr package provides R bindings for a local large language model inference engine built on llama.cpp and GGUF model files. This enables completely private, on-device text generation without requiring cloud APIs or internet connectivity.
Package repository: https://github.com/PawanRamaMali/edgemodelr
llama.cpp project: https://github.com/ggml-org/llama.cpp
GGUF format: https://github.com/ggml-org/ggml