Build and train an Artificial Neural Network of any depth with a single line of code. Choose the hyperparameters to improve the accuracy or generalisation of the model.
deepnet(
x,
y,
hiddenLayerUnits = c(2, 2),
activation = c("sigmoid", "relu"),
reluLeak = 0,
modelType = c("regress"),
iterations = 500,
eta = 10^-2,
seed = 2,
gradientClip = 0.8,
regularisePar = 0,
optimiser = "adam",
parMomentum = 0.9,
inputSizeImpact = 1,
parRmsPropZeroAdjust = 10^-8,
parRmsProp = 0.9999,
printItrSize = 100,
showProgress = TRUE,
stopError = 0.01,
miniBatchSize = NA,
useBatchProgress = FALSE,
ignoreNAerror = FALSE,
normalise = TRUE
)
x: a data frame of input variables.

y: a data frame with the output variable.

hiddenLayerUnits: a numeric vector; its length sets the number of hidden layers and each element sets the number of hidden units in the corresponding layer, e.g. c(6, 4) for two layers, one with 6 hidden units and the other with 4. Note: the output layer is created automatically.

activation: one of "sigmoid", "relu", "sin", "cos", "none". The default is "sigmoid". Choose one activation per hidden layer.

reluLeak: numeric. Applicable when activation is "relu". Specify a small positive value close to zero, e.g. 0.01 or 0.001.

modelType: one of "regress", "binary", "multiClass". "regress" creates a linear single-unit output layer for regression. "binary" creates a single sigmoid-activated output unit. "multiClass" creates a softmax-activated layer with one unit per output class.

iterations: integer. Number of iterations (epochs) of backpropagation. The default is 500.

eta: numeric. Hyperparameter setting the learning rate for backpropagation; eta determines whether and how quickly training converges.

seed: numeric. Sets the random seed. With the "sin" activation, changing the seed can sometimes yield better results. Default is 2.

gradientClip: numeric. Hyperparameter limiting the gradient magnitude used in the weight update during backpropagation. It can take any positive value; the default is 0.8.

regularisePar: numeric. L2 regularisation parameter.

optimiser: one of "gradientDescent", "momentum", "rmsProp", "adam". Default is "adam". An illustrative call combining several of these hyperparameters follows this list.

parMomentum: numeric. Applicable to the "momentum" and "adam" optimisers.

inputSizeImpact: numeric. Scales the gradient by the proportion of rows in the input. For very small datasets, setting this to 0 can yield faster results. Default is 1.

parRmsPropZeroAdjust: numeric. Applicable to the "rmsProp" and "adam" optimisers.

parRmsProp: numeric. Applicable to the "rmsProp" and "adam" optimisers.

printItrSize: numeric. Number of iterations between progress messages. Default is 100; for fewer than 100 iterations, at least 5 messages are shown.

showProgress: logical. TRUE shows progress; FALSE suppresses it.

stopError: numeric. RMSE at which iterations stop early. Default is 0.01; set to NA if all iterations need to run.

miniBatchSize: integer. Sets the mini-batch size for mini-batch gradient descent.

useBatchProgress: logical. Applicable with mini-batches: TRUE reports RMSE on the current batch, FALSE on the full dataset. Set TRUE for large datasets.

ignoreNAerror: logical. Set TRUE if iterations should stop when predictions become NA.

normalise: logical. Set FALSE if normalisation is not required. Default is TRUE.
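For instance, the following sketch combines several of the hyperparameters above; the values shown are illustrative assumptions, not tuned recommendations.

require(deepdive)
x <- data.frame(x1 = runif(100), x2 = runif(100))
y <- data.frame(y = 5*x$x1 - 3*x$x2 + 2)
model <- deepnet(x, y,
                 hiddenLayerUnits = c(8, 4),
                 activation = c("relu", "sigmoid"),
                 reluLeak = 0.01,
                 modelType = "regress",
                 iterations = 1000,
                 eta = 0.01,               # illustrative learning rate
                 regularisePar = 0.001,    # mild L2 penalty
                 optimiser = "rmsProp",
                 parRmsProp = 0.9999,
                 miniBatchSize = 32,       # mini-batch updates
                 useBatchProgress = TRUE,  # report RMSE per batch
                 stopError = NA)           # run all iterations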
Returns a model object which can be passed into predict.deepnet.
require(deepdive)

x <- data.frame(x1 = runif(10), x2 = runif(10))
y <- data.frame(y = 20*x$x1 + 30*x$x2 + 10)

#train
modelnet <- deepnet(x, y, c(2, 2),
                    activation = c("relu", "sigmoid"),
                    reluLeak = 0.01,
                    modelType = "regress",
                    iterations = 5,
                    eta = 0.8,
                    optimiser = "adam")

#predict
predDeepNet <- predict.deepnet(modelnet, newData = x)

#evaluate
sqrt(mean((predDeepNet$ypred - y$y)^2))
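For classification, modelType switches the output layer. Below is a minimal binary sketch, assuming a 0/1 outcome column and illustrative hyperparameter values; the shape of the prediction output may differ from the regression case.

xb <- data.frame(x1 = runif(50), x2 = runif(50))
yb <- data.frame(y = ifelse(xb$x1 + xb$x2 > 1, 1, 0))  # assumed 0/1 coding
modelBin <- deepnet(xb, yb, c(4),
                    activation = c("sigmoid"),
                    modelType = "binary",   # single sigmoid-activated output unit
                    iterations = 200,
                    eta = 0.1)
predBin <- predict.deepnet(modelBin, newData = xb)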