Validation of new model fitting approaches requires the proper use of resampling techniques for prediction error estimation. Especially in high-dimensional data situations, the computational demand can be huge. peperr accelerates computation through automatic parallelisation of the resampling procedure if a compute cluster is available. A noticeable speed-up is reached even on a dual-core processor.
Resampling-based prediction error estimation requires, for each split into training and test data, the following steps: a) selection of model complexity (if desired), using the training data set, b) fitting the model with the selected (or a given) complexity on the training set and c) measurement of the prediction error on the corresponding test set.
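The three steps above can be sketched as a generic loop (illustrative R pseudocode; select.complexity, fit.model and prediction.error are hypothetical placeholders, not peperr functions):

```r
## Generic resampling scheme, one iteration per training/test split;
## the helper functions are hypothetical placeholders.
err <- vector("list", length(train.index))
for (i in seq_along(train.index)) {
  train <- data[train.index[[i]], ]
  test  <- data[test.index[[i]], ]
  cplx     <- select.complexity(train)          # step a), if desired
  model    <- fit.model(train, cplx)            # step b)
  err[[i]] <- prediction.error(model, test)     # step c)
}
```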
Functions for fitting the model, for determining model complexity (if required by the fitting procedure) and for aggregating the prediction error are passed as arguments fit.fun, complexity and aggregation.fun. The following functions are already available:

for model fit: fit.CoxBoost, fit.coxph, fit.LASSO, fit.rsf_mtry

to determine complexity: complexity.mincv.CoxBoost, complexity.ipec.CoxBoost, complexity.LASSO, complexity.ipec.rsf_mtry

to aggregate prediction error: aggregation.pmpec, aggregation.brier, aggregation.misclass
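A minimal call combining these building blocks might look as follows (a sketch with simulated data; the resampling settings and all numeric values are illustrative):

```r
library(peperr)
library(survival)

## simulated survival data (purely illustrative)
set.seed(123)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), nrow = n)
time <- rexp(n); status <- rbinom(n, 1, 0.7)

## evaluate CoxBoost via 0.632 subsampling with 20 resampling runs
pe <- peperr(response = Surv(time, status), x = x,
             fit.fun = fit.CoxBoost,
             complexity = complexity.mincv.CoxBoost,
             aggregation.fun = aggregation.pmpec,
             indices = resample.indices(n = n, method = "sub632",
                                        sample.n = 20))
```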
Function peperr is especially designed for the evaluation of newly developed model fitting routines. For that purpose, own routines can be passed as arguments to the peperr call. They are incorporated as follows (compare also the existing functions named above):
Model fitting techniques that require selection of one or more complexity parameters often provide routines, based on cross-validation or similar approaches, for determining these parameters. If such a routine is already at hand, the complexity function needed for the peperr call is no more than a wrapper around it: it provides the data in the required form, calls the routine and returns the selected complexity value(s).
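For illustration, a complexity function wrapping the cross-validation routine of an external fitting package might look like this (a sketch; the argument pattern follows the existing complexity.* functions, and cv.glmnet from package glmnet serves only as an example of such a routine):

```r
## Sketch of a custom complexity function: provide the data in the
## required form, call the package's own cross-validation routine and
## return the selected complexity value.
complexity.mylasso <- function(response, x, full.data, ...) {
  require(glmnet)
  cv <- cv.glmnet(x = x, y = response, family = "cox")
  cv$lambda.min   # selected penalty parameter
}
```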
For a given model fitting routine, the fitting function, which is passed to the peperr call as argument fit.fun, is no more than a wrapper around it: response and matrix of covariates are transformed to the required form (if necessary), the routine is called with the passed complexity value (if required) and the fitted prediction model is returned.
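A matching fitting function is then a thin wrapper around the same package's fitting routine (again a sketch; the argument pattern follows the existing fit.* functions):

```r
## Sketch of a custom fit function: transform the data if necessary,
## call the fitting routine with the passed complexity value and
## return the fitted prediction model.
fit.mylasso <- function(response, x, cplx, ...) {
  require(glmnet)
  glmnet(x = x, y = response, family = "cox", lambda = cplx)
}
```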
Prediction error is estimated from a fitted model and a data set by some comparison of the true and the predicted response values. In the case of survival response, the apparent error (type apparent), for which the prediction error is estimated in the same data set as used for model fitting, and the no-information error (type noinf), which calculates the prediction error in permuted data, have to be provided. Note that the aggregation function returns the error with an additional attribute called addattr. The evaluation time points have to be stored there to allow later access.
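A custom aggregation function for a binary response might look as follows (a sketch; the argument pattern follows aggregation.misclass, the predict call assumes a model with a standard predict method, and for survival response the evaluation time points would additionally be stored in the addattr attribute as described above):

```r
## Sketch of a custom aggregation function returning the
## misclassification rate; type "noinf" permutes the response.
aggregation.myerror <- function(full.data, response, x, model, cplx = NULL,
                                type = c("apparent", "noinf"), ...) {
  type <- match.arg(type)
  if (type == "noinf") {
    response <- response[sample(length(response))]
  }
  pred <- predict(model, newdata = as.data.frame(x), type = "response")
  err <- mean(as.numeric(pred > 0.5) != response)
  ## for survival response, store the evaluation time points:
  ## attr(err, "addattr") <- eval.times
  err
}
```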
In the case of survival response, a user who supplies an own function for model fit may additionally provide a function for partial log-likelihood calculation, called PLL.class. If prediction error curves are used for aggregation (aggregation.pmpec), a predictProb method has to be provided, i.e. for each model of class class a function predictProb.class, see there.
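Such a method could, for a hypothetical model class "mymodel" that stores a coxph fit in its fit component (an assumption for this sketch), be built on survfit from package survival:

```r
library(survival)

## Sketch of a predictProb method: returns a matrix of predicted
## survival probabilities with one row per observation in x and one
## column per requested time point.
predictProb.mymodel <- function(object, response, x, times, ...) {
  sf <- survfit(object$fit, newdata = as.data.frame(x))
  t(summary(sf, times = times)$surv)
}
```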
Concerning parallelisation, there are three possibilities to run peperr:

Start R on the command line with sfCluster and the preferred options, for example the number of CPUs. Leave the three arguments parallel, clustertype and nodes unchanged.

Use any other cluster solution supported by snowfall, i.e. LAM/MPI, socket, PVM or NWS (set argument clustertype). Argument parallel has to be set to TRUE and the number of CPUs can be chosen via argument nodes.

If no cluster is used, R works sequentially. Keep parallel=NULL. No parallelisation takes place and therefore no speed-up can be obtained.
In general, if parallel=NULL, all information concerning the cluster set-up is taken from the command line; otherwise, it can be specified using the three arguments parallel, clustertype and nodes and, if necessary, clusterhosts.
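For example (a sketch; response and x are assumed to be already defined, and the fit, complexity and aggregation arguments are those of the built-in CoxBoost functions):

```r
## Socket cluster on the local machine, two slave R processes:
pe <- peperr(response = response, x = x,
             fit.fun = fit.CoxBoost,
             complexity = complexity.mincv.CoxBoost,
             aggregation.fun = aggregation.pmpec,
             parallel = TRUE, clustertype = "SOCK", nodes = 2)

## Sequential execution, or set-up taken from the command line
## (e.g. when R was started via sfCluster):
pe <- peperr(response = response, x = x,
             fit.fun = fit.CoxBoost,
             complexity = complexity.mincv.CoxBoost,
             aggregation.fun = aggregation.pmpec,
             parallel = NULL)
```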
sfCluster is a Unix tool for flexible and comfortable management of parallel R processes. However, peperr is usable with any other cluster solution supported by snowfall, i.e. sfCluster does not have to be installed to use package peperr. Note that this may require cluster handling by the user, e.g. a manual shut-down with 'lamhalt' on the command line for type="MPI". Using a socket cluster (arguments parallel=TRUE and clustertype="SOCK"), in contrast, does not require any extra installation.
Note that the run time cannot be reduced any further if the number of nodes is chosen larger than the number of passed training/test samples plus one, as parallelisation takes place in the resampling procedure and one additional run is used for the computation on the full sample.
If not running in sequential mode, a specified number of R processes, called nodes, is spawned for parallel execution of the resampling procedure (see above). This requires all variables, functions and libraries necessary for the computation to be provided on each of these R processes, explicitly everything required by the, potentially user-defined, functions fit.fun, complexity and aggregation.fun. The simplest possibility is to load the whole content of the global environment and all loaded libraries on each node, by setting argument load.all=TRUE. This is not the default, as potentially a huge amount of data would be loaded to each node unnecessarily. Function extract.fun is provided to extract the functions and libraries needed; it is automatically called at each call of function peperr. Note that all required libraries have to be located in the standard library search path (obtained by .libPaths()). Another alternative is to load the required data manually on the slaves, using the snowfall functions sfLibrary, sfExport and sfExportAll. In this case, argument noclusterstart has to be switched to TRUE. Additionally, argument load.list can be set to NULL, to avoid potential overwriting of the functions and variables already loaded to the cluster nodes.
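Manual provision of the required objects could look like this (a sketch; my.help.fun stands for a hypothetical user-defined helper needed on the nodes, and response and x are assumed to be already defined):

```r
library(snowfall)
library(peperr)

## start the cluster manually and provide libraries and objects
sfInit(parallel = TRUE, cpus = 2, type = "SOCK")
sfLibrary(CoxBoost)        # library required by fit.CoxBoost
sfExport("my.help.fun")    # hypothetical user-defined helper

pe <- peperr(response = response, x = x,
             fit.fun = fit.CoxBoost,
             complexity = complexity.mincv.CoxBoost,
             aggregation.fun = aggregation.pmpec,
             noclusterstart = TRUE, load.list = NULL)
sfStop()
```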
Note that a set.seed call before calling function peperr is not sufficient to make results reproducible when running in parallel mode, as the slave R processes are own R instances and therefore not affected by it. peperr provides two possibilities to make results reproducible:
Use RNG="RNGstream" or RNG="SPRNG". Independent parallel random number streams are then initialised on the cluster nodes, using function sfClusterSetupRNG of package snowfall. A seed can be specified via argument seed; otherwise the default values are taken. Additionally, a set.seed call on the master is required, as well as argument lb=FALSE, see below.
If RNG="fixed", a seed has to be specified. It can be either a single integer or a vector of length (number of samples + 2). In the latter case, the first entry is used for the main R process, the following (number of samples) entries for the individual sample runs (executed on slave R processes in parallel mode) and the last one for the computation on the full sample (in parallel mode also executed on a slave R process). Passing an integer x is equivalent to passing the vector x+(0:(number of samples+1)). This procedure allows reproducibility in any case, i.e. also if the number of parallel processes changes, as well as in sequential execution.
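For illustration (a sketch; response and x are assumed to be already defined and all values are arbitrary): with 20 resampling runs, seed = 17 is equivalent to passing the vector 17 + (0:21), i.e. one seed for the master process, one for each of the 20 sample runs and one for the full-sample run:

```r
## reproducible run with fixed seeds, independent of the number of
## parallel processes
pe <- peperr(response = response, x = x,
             fit.fun = fit.CoxBoost,
             complexity = complexity.mincv.CoxBoost,
             aggregation.fun = aggregation.pmpec,
             indices = resample.indices(n = nrow(x), method = "sub632",
                                        sample.n = 20),
             RNG = "fixed", seed = 17)
```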
Load balancing (argument lb) means that a slave gets a new job immediately after the previous one is finished. This speeds up computation, but may change the order of jobs. Because of this, results are only reproducible if RNG="fixed" is used.