edgemodelr (version 0.2.0)

test_ollama_model_compatibility: Test if an Ollama model blob can be used with edgemodelr

Description

This function tries to load an Ollama GGUF blob with edgemodelr using a minimal configuration and then runs a very short completion. It is intended to quickly detect common incompatibilities (unsupported architectures, invalid or unsupported GGUF files, or models that cannot run inference) before you attempt to use the model in a longer session.

Usage

test_ollama_model_compatibility(model_path, verbose = FALSE)

Value

Logical: TRUE if the model loads and can run a short completion successfully, FALSE otherwise.

Arguments

model_path

Path to the Ollama blob file (a GGUF file, typically named by its SHA-256 hash inside the Ollama models/blobs directory).

verbose

If TRUE, print human-readable diagnostics for models that fail the compatibility checks.

Details

A model is considered compatible if:

  • edge_load_model() succeeds with a small context size (n_ctx = 256) and CPU-only execution (n_gpu_layers = 0),

  • the resulting model context passes is_valid_model(),

  • and a minimal call to edge_completion() (1 token) returns without error.

When verbose = TRUE, this function classifies common failure modes: unsupported model architecture, invalid GGUF file, unsupported GGUF version, or a generic error (for which the first 80 characters of the error message are reported, with a truncation indicator if the message is longer).
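The three checks above can be sketched as follows. This is a simplified illustration, not the function's actual implementation; it assumes the edgemodelr functions named in Details (edge_load_model(), is_valid_model(), edge_completion()) plus edge_free_model() for cleanup, and the prompt and n_predict argument shown here are illustrative assumptions.

```r
library(edgemodelr)

# Sketch of the compatibility checks (assumed behavior, not the real source)
check_blob <- function(model_path) {
  # 1. Load with a small context and CPU-only execution
  ctx <- tryCatch(
    edge_load_model(model_path, n_ctx = 256, n_gpu_layers = 0),
    error = function(e) NULL
  )
  if (is.null(ctx)) return(FALSE)
  on.exit(edge_free_model(ctx))  # edge_free_model() is assumed here

  # 2. Verify the model context is valid
  if (!is_valid_model(ctx)) return(FALSE)

  # 3. Run a minimal one-token completion
  tryCatch({
    edge_completion(ctx, "Hi", n_predict = 1)  # argument names are assumptions
    TRUE
  }, error = function(e) FALSE)
}
```

If any step errors, the sketch returns FALSE rather than propagating the error, mirroring the function's logical return value.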

Examples

if (FALSE) {
# Test an individual Ollama blob
is_ok <- test_ollama_model_compatibility("/path/to/blob", verbose = TRUE)

# This function is also used internally by edge_find_ollama_models()
# when test_compatibility = TRUE.
}