Requires the future.apply package.

Usage:
find_permuted_perf_metric(
test_data,
trained_model,
outcome_colname,
perf_metric_function,
perf_metric_name,
class_probs,
feat,
test_perf_value,
nperms = 100,
alpha = 0.05,
progbar = NULL
)

Value:

A vector of the mean permuted performance and the mean difference between the test and permuted performance (test minus permuted performance).
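Because this function requires the future.apply package, the permutations can presumably be parallelized by registering a future plan before calling it. A hedged sketch (assuming the function uses a future.apply backend internally; the worker count is illustrative):

```r
# Register a parallel backend for the future framework.
# Any future.apply-based loops in downstream calls will then
# run across the workers instead of sequentially.
library(future)
plan(multisession, workers = 2)

# ... call find_permuted_perf_metric() here ...

# Restore sequential evaluation when done.
plan(sequential)
```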
Arguments:

test_data: Held-out test data: a dataframe of the outcome and features.

trained_model: Trained model from caret::train().

outcome_colname: Column name, as a string, of the outcome variable
(default: NULL; the first column will be chosen automatically).

perf_metric_function: Function to calculate the performance metric to
be used for cross-validation and test performance. Some functions are
provided by caret (see caret::defaultSummary()). Defaults: binary
classification = twoClassSummary, multi-class classification =
multiClassSummary, regression = defaultSummary.

perf_metric_name: The column name from the output of the function
provided to perf_metric_function that is to be used as the performance
metric. Defaults: binary classification = "ROC", multi-class
classification = "logLoss", regression = "RMSE".

class_probs: Whether to use class probabilities (TRUE for categorical
outcomes, FALSE for numeric outcomes).

feat: Feature or group of correlated features to permute.

test_perf_value: Value of the true performance metric on the held-out
test data.

nperms: Number of permutations to perform (default: 100).

alpha: Alpha level for the confidence interval
(default: 0.05 to obtain a 95% confidence interval).

progbar: Optional progress bar (default: NULL).
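The permutation-importance idea behind this function can be sketched in base R: permute one feature in the held-out data, re-predict, and compare the permuted performance to the true test performance. This is a minimal illustration with a linear model and RMSE, not the package's implementation; all names here (perm_perf, rmse, etc.) are illustrative.

```r
set.seed(42)
n <- 200
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- 2 * dat$x1 + rnorm(n)  # x1 is informative, x2 is pure noise

fit <- lm(y ~ x1 + x2, data = dat)
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
test_perf <- rmse(dat$y, predict(fit, dat))

# Permute one feature nperms times and record the performance each time;
# the mean difference from test_perf estimates the feature's importance.
perm_perf <- function(feat, nperms = 100) {
  replicate(nperms, {
    shuffled <- dat
    shuffled[[feat]] <- sample(shuffled[[feat]])
    rmse(dat$y, predict(fit, shuffled))
  })
}

mean_perm_x1 <- mean(perm_perf("x1"))
mean_perm_x2 <- mean(perm_perf("x2"))
# Permuting the informative feature degrades RMSE far more than
# permuting the noise feature, whose permuted performance stays
# close to test_perf.
```

The same logic extends to a group of correlated features (the feat argument): permute the group's columns together so their joint contribution is broken in one step.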
Authors:

Begüm Topçuoğlu, topcuoglu.begum@gmail.com

Zena Lapp, zenalapp@umich.edu

Kelly Sovacool, sovacool@umich.edu