Running Local LLMs with 'llama.cpp' Backend
Description
The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models.
It uses a lightweight architecture in which the precompiled C++ backend library is downloaded
at runtime rather than bundled, keeping the installed package small.
Features include text generation, reproducible (seed-controlled) generation, and parallel inference.
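A minimal usage sketch of the workflow described above: fetch the backend at runtime, load a model, and generate text with a fixed seed for reproducibility. The function names (`install_backend`, `model_load`, `generate`) and their arguments are illustrative assumptions, not the package's confirmed API; consult the package documentation for the actual calls.

```r
# Hypothetical sketch -- function names and arguments are assumptions,
# not the documented localLLM API.
library(localLLM)

# Download the precompiled llama.cpp backend library at runtime
# (the package does not bundle it).
install_backend()

# Load a local GGUF model file.
model <- model_load("path/to/model.gguf")

# Generate text; fixing the seed is what makes generation reproducible.
out <- generate(model,
                prompt     = "Explain R vectors in one sentence.",
                max_tokens = 64,
                seed       = 42)
cat(out)
```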