
MetaClean (version 1.0.0)

runCrossValidation: Run Cross-Validation for a List of Algorithms with Peak Quality Metric Feature Sets

Description

Wrapper function for running cross-validation on up to 8 classification algorithms using one or more of the three available metric sets.

Usage

runCrossValidation(
  trainData,
  k,
  repNum,
  rand.seed = NULL,
  models = "all",
  metricSet = "M11"
)

Arguments

trainData

dataframe. Rows should correspond to peaks, columns should include peak quality metrics and class labels only.

k

integer. Number of folds to be used in cross-validation.

repNum

integer. Number of cross-validation rounds to perform.

rand.seed

integer. Seed used to set the state of the random number generator.

models

character string or vector. Specifies the classification algorithms to be trained from the eight available: DecisionTree, LogisticRegression, NaiveBayes, RandomForest, SVM_Linear, AdaBoost, NeuralNetwork, and ModelAveragedNeuralNetwork. "all" specifies the use of all models. Default is "all".

metricSet

character string or vector. The metric set(s) to be run with the selected model(s). Select from the following: M4, M7, and M11. Use c() to select multiple metric sets. "all" specifies the use of all metric sets. Default is "M11".

Value

A list of up to 8 trained models.
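
The structure of the returned list can be inspected directly. A minimal sketch (this page does not document the list's internal layout, so the inspection calls below are illustrative only; it assumes MetaClean is installed and that the pqMetrics_development dataframe from the Examples is available):

```r
library(MetaClean)

# Train two of the eight available algorithms on the default M11 metric set
models <- runCrossValidation(trainData = pqMetrics_development, k = 5,
                             repNum = 10, rand.seed = 453,
                             models = c("DecisionTree", "RandomForest"))

length(models)              # at most 8 trained models are returned
str(models, max.level = 1)  # inspect the top-level structure of each entry
```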

Examples

# NOT RUN {
# train a classification algorithm
models <- runCrossValidation(trainData = pqMetrics_development, k = 5,
                             repNum = 10, rand.seed = 453,
                             models = "DecisionTree")
# }
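
As noted under the metricSet argument, multiple metric sets can be selected with c(). A hedged sketch building on the example above (assumes the same pqMetrics_development data):

```r
library(MetaClean)

# Train all eight algorithms on both the M4 and M11 metric sets
models_multi <- runCrossValidation(trainData = pqMetrics_development,
                                   k = 5, repNum = 10, rand.seed = 453,
                                   models = "all",
                                   metricSet = c("M4", "M11"))
```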
