ggmlR (version 0.6.1)

ggml_fit.ggml_functional_model: Train a Model (dispatcher)

Description

Dispatcher: if the first argument is a ggml_sequential_model, delegates to the Keras-style high-level API (ggml_fit_sequential); otherwise delegates to the low-level optimizer loop (ggml_fit_opt).
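The dispatch described above is ordinary R S3 method dispatch on the class of the first argument. A minimal, self-contained sketch (the method bodies here are placeholders; the real ggmlR methods perform the actual training):

```r
# Generic: dispatch on the class of `model`
ggml_fit <- function(model, ...) UseMethod("ggml_fit")

# Sequential models would be routed to the Keras-style ggml_fit_sequential()
ggml_fit.ggml_sequential_model <- function(model, ...) "sequential"

# Everything else would fall through to the low-level ggml_fit_opt()
ggml_fit.default <- function(model, ...) "low-level"

m <- structure(list(), class = "ggml_sequential_model")
ggml_fit(m)       # -> "sequential"
ggml_fit(list())  # -> "low-level"
```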

Usage

# S3 method for ggml_functional_model
ggml_fit(
  model,
  x,
  y,
  epochs = 1L,
  batch_size = 32L,
  validation_split = 0,
  validation_data = NULL,
  verbose = 1L,
  ...
)

ggml_fit(model, ...)

# S3 method for ggml_sequential_model
ggml_fit(model, ...)

# S3 method for default
ggml_fit(model, ...)

Value

For Sequential models: the trained model (invisibly). For the low-level API: a data frame with columns epoch, train_loss, train_accuracy, val_loss, and val_accuracy.
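The low-level history data frame can be inspected like any other data frame, e.g. to find the epoch with the lowest validation loss. A sketch with made-up numbers (only the column names come from this page):

```r
# Hypothetical training history; values are fabricated for illustration
history <- data.frame(
  epoch          = 1:3,
  train_loss     = c(0.92, 0.61, 0.48),
  train_accuracy = c(0.55, 0.71, 0.80),
  val_loss       = c(0.95, 0.70, 0.66),
  val_accuracy   = c(0.52, 0.66, 0.72)
)

# Epoch with the lowest validation loss
best <- history$epoch[which.min(history$val_loss)]
best  # -> 3
```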

Arguments

model

A compiled model object.

x

Training data (matrix or array).

y

Training labels (matrix, one-hot encoded).

epochs

Number of training epochs (default: 1).

batch_size

Batch size (default: 32).

validation_split

Fraction of data for validation (default: 0).

validation_data

Optional list(x_val, y_val). Overrides validation_split.

verbose

0 = silent, 1 = progress (default: 1).

...

Arguments passed to the appropriate implementation.

Details

Keras-style (Sequential model):

model

A compiled ggml_sequential_model

x

Training data (matrix or array)

y

Training labels (matrix, one-hot encoded for classification)

epochs

Number of training epochs (default: 1)

batch_size

Batch size (default: 32)

validation_split

Fraction of data for validation (default: 0)

validation_data

Optional list(x_val, y_val) for validation. Overrides validation_split.

class_weight

Named vector of weights per class, e.g. c("0"=1, "1"=10). Cannot be used with sample_weight.

sample_weight

Numeric vector of per-sample weights (length = nrow(x)). Cannot be used with class_weight.

verbose

0 = silent, 1 = progress (default: 1)
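Since class_weight and sample_weight are mutually exclusive, it can help to see that a class_weight vector is just shorthand for a per-sample weight vector. A small sketch in plain R, following the c("0" = 1, "1" = 10) naming convention shown above (the label vector is invented for illustration):

```r
# A class_weight vector maps class labels to weights
class_weight <- c("0" = 1, "1" = 10)

# Hypothetical per-sample class labels (one per row of x)
labels <- c("0", "1", "1", "0", "1")

# Expanding by name yields the equivalent sample_weight vector
sample_weight <- unname(class_weight[labels])
sample_weight  # -> 1 10 10 1 10
```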

Low-level (optimizer loop):

sched

Backend scheduler

ctx_compute

Compute context

inputs

Input tensor

outputs

Output tensor

dataset

Dataset from ggml_opt_dataset_init()

loss_type

Loss type (default: MSE)

optimizer

Optimizer type (default: AdamW)

nepoch

Number of epochs (default: 10)

nbatch_logical

Logical batch size (default: 32)

val_split

Validation fraction (default: 0)

callbacks

List of callback objects

silent

Suppress output (default: FALSE)
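A schematic sketch of a low-level call, using only the argument names and defaults listed above. This is not runnable as shown: it assumes a working ggml backend and that sched, ctx_compute, inputs, outputs, and the model object were already created with ggmlR's low-level API (see ggml_fit_opt and ggml_opt_dataset_init for the real setup).

```r
# Illustrative only -- objects on the right-hand side are assumed to exist
history <- ggml_fit(
  model,                             # non-sequential object -> default method
  sched          = sched,            # backend scheduler
  ctx_compute    = ctx_compute,      # compute context
  inputs         = inputs,           # input tensor
  outputs        = outputs,          # output tensor
  dataset        = dataset,          # from ggml_opt_dataset_init()
  nepoch         = 10L,              # defaults shown for clarity
  nbatch_logical = 32L,
  val_split      = 0,
  silent         = FALSE
)
# `history` is the data frame described in the Value section
```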

See Also

ggml_fit_opt, ggml_compile

Examples

# \donttest{
# Toy data: 4 features, 2 one-hot classes (class 1 if the row sum exceeds 2)
n <- 128
x <- matrix(runif(n * 4), nrow = n, ncol = 4)
y <- matrix(0, nrow = n, ncol = 2)
for (i in seq_len(n)) y[i, if (sum(x[i, ]) > 2) 1L else 2L] <- 1

model <- ggml_model_sequential() |>
  ggml_layer_dense(8, activation = "relu") |>
  ggml_layer_dense(2, activation = "softmax")
model$input_shape <- 4L
model <- ggml_compile(model, optimizer = "adam",
                      loss = "categorical_crossentropy")

# Basic training
model <- ggml_fit(model, x, y, epochs = 5, batch_size = 32, verbose = 0)

# With validation_data
x_val <- matrix(runif(32 * 4), nrow = 32, ncol = 4)
y_val <- matrix(0, nrow = 32, ncol = 2)
for (i in seq_len(32)) { y_val[i, if (sum(x_val[i,]) > 2) 1L else 2L] <- 1 }
model <- ggml_fit(model, x, y, epochs = 3, batch_size = 32,
                  validation_data = list(x_val, y_val), verbose = 0)

# With class_weight (useful for imbalanced classes)
model <- ggml_fit(model, x, y, epochs = 3, batch_size = 32,
                  class_weight = c("0" = 1, "1" = 2), verbose = 0)

# With sample_weight
sw <- runif(n, 0.5, 1.5)
model <- ggml_fit(model, x, y, epochs = 3, batch_size = 32,
                  sample_weight = sw, verbose = 0)
# }
