gradDescent (version 2.0)

adagrad: Adaptive Subgradient (AdaGrad) Learning Function

Description

This is an internal function of the learning stage that implements the Adaptive Subgradient (AdaGrad) method to build the model.
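
In the AdaGrad update, each parameter keeps a running sum of its squared gradients, and the step for that parameter is divided by the square root of this sum, so frequently updated parameters receive smaller steps. The following is a minimal sketch of the rule for linear regression, not the package's internal code; all names are illustrative:

# Minimal AdaGrad sketch for linear regression (illustrative only, not the
# package's internal implementation).
# X: input matrix (with an intercept column), y: output matrix or vector,
# alpha: learning rate, smooth: guard against division by zero.
adagradSketch <- function(X, y, alpha = 0.1, smooth = 1e-8, maxIter = 100) {
  theta <- matrix(0, nrow = 1, ncol = ncol(X))  # current model values
  G     <- matrix(0, nrow = 1, ncol = ncol(X))  # accumulated squared gradients
  for (iter in seq_len(maxIter)) {
    error <- (X %*% t(theta)) - y               # prediction error
    grad  <- t(error) %*% X / nrow(X)           # gradient of the squared error
    G     <- G + grad^2                         # accumulate squared gradients
    theta <- theta - alpha * grad / (sqrt(G) + smooth)  # per-parameter step
  }
  theta
}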

Usage

adagrad(inputData, outputData, list)

Arguments

inputData
a matrix of input data created inside the gradDescent.learn function.
outputData
a matrix of output data created inside the gradDescent.learn function.
list
a list of parameters that customize the learning (see the Examples section below).
  • rowLength: an integer giving the data length (number of rows).
  • theta: a matrix of floats holding the current model values.
  • alpha: a float value for the learning rate.
  • momentum: a float value that gives a constant speed to the learning process.
  • smooth: a small float value used to avoid division by zero in certain learning methods.
  • stochastic: a boolean value to enable stochastic mode, which selects one random example from the training data instead of processing the whole training set.
  • accelerate: a boolean value to enable accelerated learning with momentum.
  • maxIter: an integer value for the maximum number of iterations.

Value

a matrix of theta (the updated model values)

References

J. Duchi, E. Hazan, and Y. Singer. (2011). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12, 2121-2159.
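
Examples

The sketch below is illustrative rather than taken from the package: it builds the parameter list from the fields documented under Arguments and calls adagrad directly, although in normal use this internal function is invoked by gradDescent.learn. The data, starting values, and exact list layout are assumptions.

# Illustrative call of the internal adagrad function; field names in the
# parameter list follow the Arguments section above and are assumed here.
inputData  <- matrix(c(rep(1, 5), 1:5), ncol = 2)   # intercept column plus one feature
outputData <- matrix(2 + 3 * (1:5), ncol = 1)        # target values

params <- list(
  rowLength  = nrow(inputData),                      # number of rows in the data
  theta      = matrix(0, ncol = ncol(inputData)),    # initial model values
  alpha      = 0.1,                                  # learning rate
  momentum   = 0.9,                                  # constant speed for the learning process
  smooth     = 1e-8,                                 # guard against division by zero
  stochastic = FALSE,                                # process the whole training set
  accelerate = FALSE,                                # no accelerated momentum
  maxIter    = 100                                   # maximum number of iterations
)

theta <- adagrad(inputData, outputData, params)      # returns a matrix of theta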