ggmlR (version 0.6.1)

ag_gradcheck: Numerical gradient check (like torch.autograd.gradcheck)

Description

Compares analytical gradients (from backward()) with finite-difference numerical gradients for all input tensors with requires_grad = TRUE.
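The idea behind the check can be illustrated in base R: a difference quotient approximates each partial derivative, and the check passes when it agrees with the analytical gradient within `atol`. This is a hedged sketch using plain matrices rather than `ag_tensor` objects, and it assumes a central difference with step `eps`; the package's exact finite-difference scheme may differ.

```r
# Sketch: numerical gradient of f(W) = sum((W %*% x)^2) via central differences.
# The analytical gradient of this f is 2 * (W %*% x) %*% t(x).
numeric_grad <- function(f, W, eps = 1e-5) {
  g <- matrix(0, nrow(W), ncol(W))
  for (i in seq_along(W)) {          # perturb one element at a time
    Wp <- W; Wp[i] <- Wp[i] + eps
    Wm <- W; Wm[i] <- Wm[i] - eps
    g[i] <- (f(Wp) - f(Wm)) / (2 * eps)   # central difference quotient
  }
  g
}

set.seed(1)
W <- matrix(runif(6), 2, 3)
x <- matrix(runif(3), 3, 1)
f <- function(W) sum((W %*% x)^2)

analytical <- 2 * (W %*% x) %*% t(x)
numerical  <- numeric_grad(f, W)
max(abs(analytical - numerical)) < 1e-4   # TRUE: agreement within atol
```

`ag_gradcheck()` automates this element-by-element comparison over every tracked input tensor, using `backward()` for the analytical side.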

Usage

ag_gradcheck(
  fn,
  inputs,
  eps = 1e-05,
  atol = 1e-04,
  verbose = FALSE,
  quiet = FALSE
)

Value

Invisibly returns TRUE if all gradients match within atol, FALSE otherwise. When quiet = FALSE (the default), a summary report is printed.

Arguments

fn

A function that takes a named list of ag_tensor inputs and returns a scalar ag_tensor loss (must be used inside with_grad_tape so gradients can be recorded).

inputs

Named list of ag_tensor objects. Only those with requires_grad = TRUE are checked.

eps

Finite-difference step size (default 1e-5).

atol

Absolute tolerance for pass/fail (default 1e-4).

verbose

If TRUE, print a per-element comparison of analytical and numerical gradients (default FALSE).

quiet

If TRUE, suppress the per-parameter and overall status lines (default FALSE). Useful when calling ag_gradcheck() from testthat tests to keep the output clean.

Examples

W <- ag_param(matrix(runif(6), 2, 3))   # 2x3 weight; ag_param tracks gradients
x <- ag_tensor(matrix(runif(3), 3, 1))  # 3x1 input; plain tensor, not checked
ag_gradcheck(
  fn = function(ins) ag_mse_loss(ag_relu(ag_matmul(ins$W, ins$x)),
                                 matrix(0, 2, 1)),
  inputs = list(W = W, x = x)
)
