
Usage

msgl.subsampling(x, classes,
  sampleWeights = rep(1/length(classes), length(classes)),
  grouping = NULL, groupWeights = NULL,
  parameterWeights = NULL, alpha = 0.5,
  standardize = TRUE, lambda, training, test,
  sparse.data = FALSE, max.threads = 2L,
  algorithm.config = sgl.standard.config)
Arguments

groupWeights
the group weights. If groupWeights = NULL, default weights will be used. Default weights are 0 for the intercept and $$\sqrt{K\cdot\textrm{(number of features in the group)}}$$ for all other groups.
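As an illustration of this default weighting scheme, the weights could be computed by hand as follows (a sketch, not part of the package; the variable names K, grouping, and default.weights are illustrative):

# Sketch: default group weights as described above, assuming a grouping
# factor over the features and K classes
K <- length(unique(classes))
group.sizes <- as.numeric(table(grouping))      # features per group
default.weights <- c(0, sqrt(K * group.sizes))  # 0 for the intercept group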
sparse.data
if TRUE, x will be treated as sparse; if x is a sparse matrix, it will be treated as sparse by default.
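For example, a dense design matrix can be coerced to a sparse one before fitting (a sketch assuming the Matrix package):

library(Matrix)
x.sparse <- Matrix(x, sparse = TRUE)  # a sparse x is treated as sparse automatically
# Alternatively, keep x dense and request sparse treatment explicitly:
# fit.sub <- msgl.subsampling(x, classes, lambda = lambda,
#   training = train, test = test, sparse.data = TRUE)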
Value

link
a list of length length(test), with each element of the list another list of length length(lambda) (one item for each lambda value), with each item a matrix of size $K \times N$ containing the linear predictors.
response
a list of length length(test), with each element of the list another list of length length(lambda) (one item for each lambda value), with each item a matrix of size $K \times N$ containing the probabilities.
classes
a list of length length(test), with each element of the list a matrix of size $N \times d$, with $d=$ length(lambda), containing the estimated classes.
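To make the nesting concrete, the returned object could be indexed as follows (a sketch; fit.sub, i, and j are placeholders for a fitted object, a subsample index, and a lambda index, and the element names follow the Value items above):

eta  <- fit.sub$link[[i]][[j]]      # K x N linear predictors, subsample i, lambda j
prob <- fit.sub$response[[i]][[j]]  # K x N probabilities, subsample i, lambda j
cls  <- fit.sub$classes[[i]]        # N x length(lambda) estimated classes, subsample i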
Examples

data(SimData)
x <- sim.data$x
classes <- sim.data$classes
lambda <- msgl.lambda.seq(x, classes, alpha = .5, d = 100L, lambda.min = 0.03)
test <- replicate(5, sample(1:length(classes))[1:20], simplify = FALSE)
train <- lapply(test, function(s) (1:length(classes))[-s])
fit.sub <- msgl.subsampling(x, classes, alpha = .5, lambda = lambda,
training = train, test = test)
# Misclassification count of second subsample
colSums(fit.sub$classes[[2]] != classes[test[[2]]])
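Building on the example above, the misclassification rate can also be averaged over all subsamples (a sketch using only objects already defined):

# Per-lambda misclassification rate for each subsample, then averaged
err <- sapply(seq_along(test), function(i)
  colMeans(fit.sub$classes[[i]] != classes[test[[i]]]))
rowMeans(err)  # average error rate for each lambda value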