edgemodelr (version 0.2.0)

edgemodelr-package: edgemodelr: Local Large Language Model Inference Engine

Description

Enables R users to run large language models locally using 'GGUF' model files and the 'llama.cpp' inference engine. Provides a complete R interface for loading models, generating text completions, and streaming responses in real-time. Supports local inference without requiring cloud APIs or internet connectivity, ensuring complete data privacy and control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) https://github.com/ggml-org/llama.cpp.

Main Functions

edge_load_model

Load a GGUF model file

edge_completion

Generate text completions

edge_stream_completion

Stream text generation in real-time

edge_chat_stream

Interactive chat interface

edge_quick_setup

One-line model download and setup

edge_free_model

Release model memory
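The functions above compose into a simple load → generate → free cycle. Below is a minimal sketch using only the calls documented on this page; the model path is a placeholder, and any optional arguments are omitted because their names are not documented here:

```r
# Build the prompt with plain string tools (pure R, no model needed).
prompt <- paste("Summarise in one sentence:", "R is a language for statistics.")

# Guarded so the sketch is harmless when the package is not installed.
if (requireNamespace("edgemodelr", quietly = TRUE)) {
  library(edgemodelr)
  ctx <- edge_load_model("models/tinyllama-1.1b.gguf")  # placeholder path to a GGUF file
  cat(edge_completion(ctx, prompt), "\n")               # generate a completion
  edge_free_model(ctx)                                  # always release model memory
}
```

Pairing every `edge_load_model()` with an `edge_free_model()` matters because a loaded model can occupy gigabytes of RAM for the life of the R session.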

Model Management

edge_list_models

List available pre-configured models

edge_download_model

Download models from Hugging Face
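A hedged sketch of model discovery: `edge_list_models()` is assumed to return a table of the pre-configured entries, and the exact arguments to `edge_download_model()` are not documented on this page, so the download call is shown commented out:

```r
# The model name used elsewhere in this help page.
wanted <- "TinyLlama-1.1B"

if (requireNamespace("edgemodelr", quietly = TRUE)) {
  library(edgemodelr)
  models <- edge_list_models()   # inspect the pre-configured catalogue
  print(models)
  # path <- edge_download_model(...)  # arguments depend on the chosen entry;
  #                                   # see ?edge_download_model for the signature
}
```

In practice `edge_quick_setup(wanted)` (documented above) wraps download and loading in one call, so `edge_download_model()` is mainly useful when you want the GGUF file without loading it.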

Getting Started

Basic usage workflow:

  1. Download a model: setup <- edge_quick_setup("TinyLlama-1.1B")

  2. Generate text: edge_completion(setup$context, "Hello")

  3. Clean up: edge_free_model(setup$context)
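For token-by-token output, `edge_stream_completion()` can replace step 2. The callback shape sketched below (a function invoked with each new chunk of generated text) is an assumption, not a documented interface; consult `?edge_stream_completion` before relying on it:

```r
prompt <- "Count to five:"

if (requireNamespace("edgemodelr", quietly = TRUE)) {
  library(edgemodelr)
  setup <- edge_quick_setup("TinyLlama-1.1B")        # download + load in one call
  edge_stream_completion(setup$context, prompt,
                         callback = function(chunk) cat(chunk))  # assumed callback shape
  edge_free_model(setup$context)                     # clean up as in step 3
}
```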

For interactive chat:


setup <- edge_quick_setup("TinyLlama-1.1B")
edge_chat_stream(setup$context)

Examples

See comprehensive examples in the package:

  • system.file("examples/getting_started_example.R", package = "edgemodelr")

  • system.file("examples/data_science_assistant_example.R", package = "edgemodelr")

  • system.file("examples/text_analysis_example.R", package = "edgemodelr")

  • system.file("examples/creative_writing_example.R", package = "edgemodelr")

  • system.file("examples/advanced_usage_example.R", package = "edgemodelr")

Run examples:


# Getting started guide
source(system.file("examples/getting_started_example.R", package = "edgemodelr"))

# Data science assistant
source(system.file("examples/data_science_assistant_example.R", package = "edgemodelr"))

System Requirements

  • C++17 compatible compiler

  • Sufficient RAM for model size (1GB+ for small models, 8GB+ for 7B models)

  • GGUF model files (downloaded automatically or manually)

Privacy and Security

This package processes all data locally on your machine. No data is sent to external servers, ensuring complete privacy and control over your text generation workflows.

Author

Pawan Rama Mali prm@outlook.in

Details

The edgemodelr package provides R bindings for a local large language model inference engine built on 'llama.cpp' and GGUF model files. This enables completely private, on-device text generation without requiring cloud APIs or internet connectivity.

See Also