edgemodelr (version 0.1.5)
Local Large Language Model Inference Engine
Description
Enables R users to run large language models locally using 'GGUF' model files
and the 'llama.cpp' inference engine. Provides a complete R interface for loading models,
generating text completions, and streaming responses in real time. All inference runs
locally, with no cloud APIs or internet connectivity required, keeping data fully
private and under the user's control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) <https://github.com/ggerganov/llama.cpp>.