snnR (version 1.0)

subgradient: Minimum-norm subgradient

Description

This function computes the minimum-norm subgradient of the approximated squared error with an L1-norm or L2-norm penalty.

Usage

subgradient(w, X, y, nHidden, lambda, lambda2)

Arguments

w

(numeric, \(n\)) vector of weights and biases.

X

(numeric, \(n \times p\)) incidence matrix.

y

(numeric, \(n\)) the response data-vector.

nHidden

(positive integer, \(1 \times h\)) matrix, where h is the number of hidden layers and nHidden[1,h] is the number of neurons in the h-th hidden layer.

lambda

(numeric, \(n\)) Lagrange multiplier for the L1-norm penalty on the parameters.

lambda2

(numeric, \(n\)) Lagrange multiplier for the L2-norm penalty on the parameters.

Value

A vector with the subgradient values.

Details

The method chooses the subgradient with minimum norm as a steepest-descent direction and takes a step resembling a Newton iteration along that direction, using an approximation to the Hessian.
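
The minimum-norm choice can be sketched for a simpler problem. The following base-R function (an illustrative sketch, not the package's internal code) computes the minimum-norm subgradient of an L1-penalized least-squares objective \(f(w) = \frac{1}{2}\lVert y - Xw\rVert^2 + \lambda\lVert w\rVert_1\); the function name and data are hypothetical. Where a coordinate of w is zero, the subdifferential is an interval, and its minimum-norm element soft-thresholds the smooth gradient:

```r
# Minimum-norm subgradient of f(w) = 0.5 * ||y - X w||^2 + lambda * ||w||_1
# (illustrative sketch; snnR applies the same idea to the approximated
#  squared error of a sparse neural network)
min_norm_subgradient <- function(w, X, y, lambda) {
  g <- as.vector(crossprod(X, X %*% w - y))  # gradient of the smooth part
  sg <- numeric(length(w))
  nz <- w != 0
  # where w_i != 0, the penalty is differentiable: add lambda * sign(w_i)
  sg[nz] <- g[nz] + lambda * sign(w[nz])
  # where w_i == 0, the subdifferential is [g_i - lambda, g_i + lambda];
  # its minimum-norm element shrinks g_i toward zero by lambda
  sg[!nz] <- sign(g[!nz]) * pmax(abs(g[!nz]) - lambda, 0)
  sg
}

set.seed(1)
X <- matrix(rnorm(20), 5, 4)
y <- rnorm(5)
w <- c(0.5, 0, -0.3, 0)
min_norm_subgradient(w, X, y, lambda = 0.1)
```

When the gradient at a zero coordinate lies inside \([-\lambda, \lambda]\), the returned subgradient is exactly zero there, so the descent direction leaves that coordinate sparse.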