ssc (version 1.0)

selfTraining: Train the Self-training model

Description

Builds and trains a model to predict the labels of instances according to the self-training algorithm.
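
In outline, self-training fits the base classifier on the labeled instances, predicts the unlabeled instances, moves the most confidently predicted ones into the labeled set, and repeats until few unlabeled instances remain or a maximum number of iterations is reached. The sketch below illustrates that loop only; it is not the internals of selfTraining, and the base classifier (e1071::naiveBayes) and the conf.threshold argument are assumptions made for this example.

# Illustrative self-training loop (a sketch, not the ssc implementation).
# Requires the e1071 package; conf.threshold is an assumed stopping rule.
selfTrainSketch <- function(x, y, conf.threshold = 0.9, max.iter = 50) {
  fit <- NULL
  for (i in seq_len(max.iter)) {
    labeled <- !is.na(y)
    if (all(labeled)) break
    # 1. Fit the base classifier on the currently labeled instances
    fit <- e1071::naiveBayes(as.data.frame(x[labeled, , drop = FALSE]),
                             factor(y[labeled]))
    # 2. Predict class probabilities for the unlabeled instances
    prob <- predict(fit, as.data.frame(x[!labeled, , drop = FALSE]), type = "raw")
    conf <- apply(prob, 1, max)
    take <- conf >= conf.threshold
    if (!any(take)) break
    # 3. Self-label the most confident predictions and iterate
    newlab <- colnames(prob)[apply(prob, 1, which.max)]
    y[which(!labeled)[take]] <- newlab[take]
  }
  list(model = fit, labels = y)
}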

Usage

selfTraining(x, y, bclassif = bClassifOneNN(), dist = "matrix", min.amount = ceiling(length(which(is.na(y))) * 0.3), max.iter = 50)

Arguments

x
An object that can be coerced to a matrix. Its interpretation depends on the value of the dist argument; see dist.
y
A vector with the labels of the training instances. Unlabeled instances are marked in this vector with the value NA.
bclassif
Base classifier specification. The default is bClassifOneNN(). To define new base classifiers see bClassif.
dist
Distance information. Valid options are:
  • "matrix": this string indicates that x is a distance matrix.
  • string: the name of a distance method available in proxy package. In this case x is interpreted as a matrix of instances.
  • function: a function defined by the user that computes the distance between two vectors. This function is called passing the vectors in the firsts two arguments. If the function have others arguments, those arguments must be have default values. In this case x is interpreted as a matrix of instances.
min.amount
Minimum number of unlabeled instances at which the training process stops. When the number of unlabeled training instances drops to this value, the self-labeling process is stopped. Default is 30% of the initial number of unlabeled instances, i.e. ceiling(length(which(is.na(y))) * 0.3).
max.iter
Maximum number of iterations to execute in the self-labeling process. Default is 50.
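
For illustration, the three forms of dist might be supplied along the following lines. This is a sketch that assumes xtrain and ytrain have been prepared as in the Examples section below; the manhattan helper is defined here only for the example.

# dist = "matrix": x is a precomputed distance matrix between training instances
dtrain <- as.matrix(proxy::dist(xtrain, method = "Euclidean"))
m1 <- selfTraining(dtrain, ytrain, dist = "matrix")

# dist = a method name from the proxy package: x is the matrix of instances
m2 <- selfTraining(xtrain, ytrain, dist = "Euclidean")

# dist = a user-defined function taking two vectors; extra arguments need defaults
manhattan <- function(a, b) sum(abs(a - b))
m3 <- selfTraining(xtrain, ytrain, dist = manhattan)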

Value

The trained model, stored in a list of named values.

References

David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189–196. Association for Computational Linguistics, 1995.

Examples

# This example is part of SelfTraining demo.
# Use demo(SelfTraining) to see all the examples.

## Load Wine data set
data(wine)

x <- wine[, -14] # instances without classes
y <- wine[, 14] # the classes
x <- scale(x) # scale the attributes

## Prepare data
set.seed(20)
# Use 50% of instances for training
tra.idx <- sample(x = length(y), size = ceiling(length(y) * 0.5))
xtrain <- x[tra.idx,] # training instances
ytrain <- y[tra.idx]  # classes of training instances
# Use 70% of train instances as unlabeled set
tra.na.idx <- sample(x = length(tra.idx), size = ceiling(length(tra.idx) * 0.7))
ytrain[tra.na.idx] <- NA # remove class information of unlabeled instances

# Use the other 50% of instances for inductive testing
tst.idx <- setdiff(1:length(y), tra.idx)
xitest <- x[tst.idx,] # testing instances
yitest <- y[tst.idx] # classes of testing instances

## Example: Using the Euclidean distance in proxy package.
m <- selfTraining(xtrain, ytrain, dist = "Euclidean")
pred <- predict(m, xitest)
caret::confusionMatrix(table(pred, yitest))
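
## Sketch: transductive evaluation on the instances left unlabeled during training.
## This reuses the objects built above and the same predict interface; the
## withheld true classes are recovered from y via tra.idx.
xttest <- xtrain[tra.na.idx,]     # training instances that were unlabeled
yttest <- y[tra.idx][tra.na.idx]  # their withheld true classes
pred.tra <- predict(m, xttest)
caret::confusionMatrix(table(pred.tra, yttest))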
