Usage

vda.r(x, y, lambda)
vda(x, y, lambda)
Arguments

x: matrix of predictor vectors, with an intercept vector added as the first column. All entries in the first column should equal 1.

y: outcome class vector. All elements should be integers between 1 and classes.

lambda: tuning constant. See cv.vda.r, which uses K-fold cross-validation to determine the optimal value.

Value

lambda: the tuning constant that was used during the analysis.

coefficient: the estimated coefficient matrix for the k-1 outcome categories; this matrix is used for classifying new cases.
Comparisons on real and simulated data suggest that the MM algorithm for VDA is competitive in statistical accuracy and computational speed with the best currently available algorithms for discriminant analysis, such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbors, one-vs-rest binary support vector machines, multicategory support vector machines, classification and regression trees (CART), and random forest prediction.
To select the tuning constant lambda, refer to cv.vda.r. For high-dimensional settings and variable selection, refer to vda.le.
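As a sketch of the workflow described above, lambda can be chosen by cross-validation before fitting. The exact argument names (k, lam.vec) and the lam.opt return component used below are assumptions about cv.vda.r's interface, so verify them against that function's help page before relying on this code.

```r
# Hypothetical sketch: choose lambda by 10-fold cross-validation,
# then refit VDA with the selected value. The cv.vda.r argument names
# and the lam.opt component are assumptions; check the cv.vda.r help page.
library(VDA)

data(zoo)
x <- zoo[, 2:17]
y <- zoo[, 18]

# candidate tuning constants on a log scale
lam.vec <- 10^seq(-3, 1, length.out = 9)

# K-fold cross-validation over the candidate grid (assumed signature)
cv <- cv.vda.r(x, y, k = 10, lam.vec = lam.vec)

# refit with the cross-validated optimum
out <- vda.r(x, y, cv$lam.opt)
```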
Examples

# load the VDA package and the zoo data
# column 1 is name, columns 2:17 are features, column 18 is class
library(VDA)
data(zoo)

# matrix containing all predictor vectors
x <- zoo[, 2:17]

# outcome class vector
y <- zoo[, 18]

# run VDA
out <- vda.r(x, y)

# predict five new cases based on the fitted VDA model
fivecases <- matrix(0, 5, 16)
fivecases[1,] <- c(1,0,0,1,0,0,0,1,1,1,0,0,4,0,1,0)
fivecases[2,] <- c(1,0,0,1,0,0,1,1,1,1,0,0,4,1,0,1)
fivecases[3,] <- c(0,1,1,0,1,0,0,0,1,1,0,0,2,1,1,0)
fivecases[4,] <- c(0,0,1,0,0,1,1,1,1,0,0,1,0,1,0,0)
fivecases[5,] <- c(0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0)
predict(out, fivecases)