Separate the dataset into training and test sets, and obtain the mean absolute error (for linear regression) or the classification accuracy (for logistic regression).
xvalPoly(xy, maxDeg, maxInteractDeg = maxDeg, use = "lm", pcaMethod = FALSE,
   pcaPortion = 0.9, glmMethod = "all", nHoldout = 10000, yCol = NULL)

xvalKf(xy, nHoldout = min(10000, round(0.2 * nrow(xy))), yCol = NULL,
   units, activation, dropout)

xvalDnet(x, y, hidden, output = "sigm", numepochs = 3, pcaMethod = FALSE,
   pcaPortion = 0.9, scaleXMat = TRUE,
   nHoldout = min(10000, round(0.2 * nrow(x))))
xy: Data matrix or data frame, with the response variable in the last column.
maxDeg: Maximum degree of the power terms.
maxInteractDeg: Maximum degree of the interaction terms.
use: Either "lm" for linear regression, or "glm" for logistic regression.
Proportion of the data to be used as the training set.
pcaMethod: If TRUE, apply PCA to the predictors first.
pcaPortion: If pcaMethod is TRUE, use principal components up to this proportion of total variance.
glmMethod: For classification problems. If there are more than two classes, this can be "all" for the All-vs-All method, or "one" for the One-vs-All method.
nHoldout: Size of the test set.
units: Vector specifying the number of units per layer. There is one element for each hidden layer, then an element for the final layer that produces the predictions; specify this last element as NA.
activation: Vector specifying the activation functions. One element for each hidden layer, then one for the function producing the final prediction, e.g. 'linear' for regression and 'softmax' for classification.
dropout: Vector specifying the dropout rate for each layer.
x: Numerical matrix of predictor values.
y: Numerical vector (regression case) or matrix (classification case) of response values. In the classification case, if there are q classes, the matrix will have q columns, with exactly one 1 per row.
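As an illustration of this response format, a hypothetical 3-class response coded 1..q can be put into one-column-per-class form in base R (the variable names here are illustrative, not part of the package):

```r
yvals <- c(1, 3, 2, 3, 1)   # hypothetical class labels in 1..q
q <- 3
ymat <- diag(q)[yvals, ]    # q columns, exactly one 1 per row
```

Each row of ymat is the indicator vector for the corresponding class label.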
The return value of xvalPoly is an R vector of mean absolute errors (for lm) or probabilities of correct classification (for glm). The i-th element of the vector is for degree i.
The xvalPoly function divides the data into training and test sets, uses polyFit to fit models on the training data and predict.polyFit to generate predictions on the test set, then compares the results. The xvalKf function does the same for neural networks built with the kerasformula package.
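The holdout procedure these functions automate can be sketched in base R; this is a simplified illustration, not the package's actual code, and it uses an ordinary lm fit in place of polyFit:

```r
set.seed(1)
y <- mtcars[, 1]
xy <- cbind(mtcars[, -1], y)                          # response in the last column
test_idx <- sample(nrow(xy), round(0.2 * nrow(xy)))   # rows held out as the test set
fit <- lm(y ~ ., data = xy[-test_idx, ])              # fit on the training rows only
pred <- predict(fit, xy[test_idx, ])                  # predict on the held-out rows
mae <- mean(abs(pred - xy$y[test_idx]))               # the error measure xvalPoly reports
```

The real functions repeat this fit/predict/compare cycle once per polynomial degree, collecting one error value per degree.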
y <- mtcars[, 1]
data <- cbind(mtcars[, -1], y)  # make y the last column
pf1 <- xvalPoly(data, 2, 2, "lm", pcaMethod = FALSE)
pf2 <- xvalPoly(data, 5, 3, "lm", pcaMethod = TRUE, pcaPortion = 0.9)
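Since the i-th element of the returned vector corresponds to degree i, the best-performing degree can be read off with which.min (for mean absolute error) or which.max (for classification accuracy). A sketch with a made-up result vector:

```r
pf <- c(2.95, 2.10, 2.34)   # hypothetical MAEs for degrees 1..3
best_deg <- which.min(pf)   # here, degree 2
```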