If a subset of samples is selected randomly, the number of samples in the positive classes might be too small, or even zero. This function repeats the sampling until every class is adequately represented in this sense.
random.subset(F_, L_, gamma, persistence = 1000, minimum.class.size=2, replace)
Arguments

F_: The feature matrix; each column is a feature.

L_: The vector of labels, named according to the rows of F_.

gamma: A value in the range 0-1 that determines the relative size of the sample subsets.

persistence: Maximum number of attempts at randomly choosing samples. If after this many tries the obtained labels are still all the same (perhaps all labels are identical), the function gives up with the error message: "Not enough variation in the labels...".

minimum.class.size: A lower bound on the number of samples in each class.

replace: If TRUE, sampling is done with replacement.
Value

Returns a list of:

X_: The sampled feature matrix; each column is a feature, after the redundant ones have been ignored.

The vector of labels, named according to the rows of X_.

remainder.samples: The names of the rows of F_ that do not appear in X_; these can later be used for validation.
The function also returns a refined feature matrix, obtained by ignoring features that become too sparse after sampling.
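The repeat-until-valid behavior described above can be sketched in a few lines of R. This is an illustrative re-implementation under the documented contract, not the package's actual code; the function name `random.subset.sketch` and the component name `Y_` are assumptions, while `F_`, `L_`, `gamma`, `persistence`, `minimum.class.size`, and `replace` follow the usage line:

```r
# Illustrative sketch of the repeat-sampling logic (not FeaLect's actual code).
random.subset.sketch <- function(F_, L_, gamma, persistence = 1000,
                                 minimum.class.size = 2, replace = FALSE) {
    n <- nrow(F_)
    for (attempt in seq_len(persistence)) {
        picked <- sample(seq_len(n), size = round(gamma * n), replace = replace)
        rows <- rownames(F_)[picked]
        Y_ <- L_[rows]
        # Accept only when more than one class is present and every class
        # has at least minimum.class.size samples.
        if (length(unique(Y_)) > 1 && min(table(Y_)) >= minimum.class.size) {
            return(list(X_ = F_[picked, , drop = FALSE], Y_ = Y_,
                        remainder.samples = setdiff(rownames(F_), rows)))
        }
    }
    stop("Not enough variation in the labels...")
}
```

Each rejected draw is simply discarded and redrawn; after `persistence` failures the function stops with the error message quoted above.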
References

"Statistical Analysis of Overfitting Features", manuscript in preparation.
See Also

FeaLect, train.doctor, doctor.validate, random.subset, compute.balanced, compute.logistic.score, ignore.redundant, input.check.FeaLect
Examples

# NOT RUN {
library(FeaLect)
data(mcl_sll)
F <- as.matrix(mcl_sll[ ,-1]) # The Feature matrix
L <- as.numeric(mcl_sll[ ,1]) # The labels
names(L) <- rownames(F)
message(dim(F)[1], " samples and ",dim(F)[2], " features.")
XY <- random.subset(F_=F, L_=L, gamma=3/4, replace=TRUE)
XY$remainder.samples
# }