accuracy

Prediction Accuracy from Stability Assessment Results

Function to compute the prediction accuracy of the learners trained in each iteration of the stability assessment procedure, from an object of class "stablelearner" or "stablelearnerList", as a counterpart to the similarity values estimated by stability.

Keywords
resampling, similarity
Usage
accuracy(x, measure = "kappa", na.action = na.exclude,
         applyfun = NULL, cores = NULL)
Arguments
x

an object of class "stablelearner" or "stablelearnerList".

measure

a character string (or a vector of character strings) specifying the name(s) of the measure(s) used to compute accuracy. Currently implemented measures are "diag" = percentage of observations on the main diagonal of the confusion matrix, "kappa" = "diag" corrected for agreement by chance (default), "rand" = Rand index, and "crand" = Rand index corrected for agreement by chance (see also classAgreement). Several measures can be requested at once; see the sketch in the Details section.

na.action

a function which indicates what should happen when the predictions contain NAs. The default is na.exclude.

applyfun

a lapply-like function. The default is to use lapply, unless cores is specified, in which case mclapply from package parallel is used (for multicore computations on platforms that support them).

cores

integer. The number of cores to use in multicore computations using mclapply (see above).

Details

This function can be used to compute the prediction accuracy of the learners after their stability has been assessed using stability.
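
For example, several agreement measures can be requested in one call, and the computation can be parallelised via cores. The following is a minimal sketch, assuming a "stablelearnerList" object stab as created in the Examples section below:

## several measures at once; one column per measure in the result
acc <- accuracy(stab, measure = c("kappa", "diag", "rand", "crand"))

## parallel evaluation via mclapply (not available on Windows)
acc <- accuracy(stab, measure = "kappa", cores = 2)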

Value

A matrix of size 2*B times length(measure) containing prediction accuracy values of the learners trained during the stability assessment procedure.
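
Since each row corresponds to one learner trained during resampling, the returned matrix can be summarised with standard tools. A minimal sketch, assuming acc has been computed as in the Details section:

colMeans(acc)   # average accuracy per measure
boxplot(acc)    # distribution of accuracy across the resampled learners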

See Also

stability

Aliases
  • accuracy
Examples
library("stablelearner")
library("partykit")

## fit a conditional inference tree to the iris data
res <- ctree(Species ~ ., data = iris)

## assess stability and compute prediction accuracy
stab <- stability(res)
accuracy(stab)
Documentation reproduced from package stablelearner, version 0.1-2, License: GPL-2 | GPL-3
