The function begins by validating the names of the models supplied in the lst_models
list and by ensuring that the dataset contains at least two events. It then checks that the
specified evaluation method is available and that the requested test times are consistent
with the training times of the models.
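A minimal sketch of this validation step is shown below. The helper name and its arguments (lst_models, y, pred.method, times) are illustrative assumptions and do not reproduce the package's internal API.

```r
validate_inputs <- function(lst_models, y, pred.method, times) {
  # Model names must be present, non-empty and unique
  if (is.null(names(lst_models)) || any(names(lst_models) == "") ||
      anyDuplicated(names(lst_models)) > 0) {
    stop("All elements of 'lst_models' must have unique, non-empty names.")
  }
  # The dataset must contain at least two observed events
  if (sum(y[, "status"] == 1) < 2) {
    stop("At least two events are required to evaluate the models.")
  }
  # The requested evaluation method must be one of the supported ones
  pred.method <- match.arg(pred.method,
                           c("risksetROC", "survivalROC", "cenROC"))
  # Evaluation times should lie within the range of the training times
  if (any(times > max(y[, "time"]))) {
    warning("Some evaluation times exceed the maximum observed training time.")
  }
  invisible(pred.method)
}
```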
The core of the function is the evaluation of each model. Depending on the user's
preference, the evaluations can be executed in parallel, which can substantially speed up the
process when a large number of models is involved. The AUC values are computed with the
evaluation method specified by the pred.method parameter; supported methods include
"risksetROC", "survivalROC", and "cenROC", among others.
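As an illustration of how such an evaluation loop might look, the sketch below runs a hypothetical per-model evaluation function, optionally in parallel via parallel::mclapply, and computes a time-dependent AUC with survivalROC::survivalROC; the other methods would be dispatched analogously. All names other than the package functions are assumptions.

```r
library(parallel)
library(survivalROC)

# Hypothetical per-model evaluation: 'risk' is the model's risk score on the
# test data, 'y' a matrix with columns "time" and "status".
eval_one_model <- function(risk, y, times, pred.method = "survivalROC") {
  sapply(times, function(t) {
    if (pred.method == "survivalROC") {
      survivalROC(Stime = y[, "time"], status = y[, "status"],
                  marker = risk, predict.time = t, method = "KM")$AUC
    } else {
      NA_real_  # "risksetROC" and "cenROC" would be handled here analogously
    }
  })
}

# Evaluate all models, in parallel when requested
run_evaluations <- function(lst_risks, y, times, parallel = FALSE, cores = 2) {
  if (parallel) {
    mclapply(lst_risks, eval_one_model, y = y, times = times, mc.cores = cores)
  } else {
    lapply(lst_risks, eval_one_model, y = y, times = times)
  }
}
```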
The Integrated Brier Score is computed with the survcomp::sbrier.score2proba() function.
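A self-contained sketch of that call is given below, using simulated data; sbrier.score2proba() expects a training and a test data frame, each with the columns time, event, and score, and returns the Brier score at each test time together with its integrated value.

```r
library(survcomp)

set.seed(1)
n <- 200
risk   <- rnorm(n)                         # assumed risk score (linear predictor)
time   <- rexp(n, rate = exp(0.5 * risk))  # simulated survival times
cens   <- rexp(n, rate = 0.1)              # simulated censoring times
obs    <- pmin(time, cens)
status <- as.integer(time <= cens)

idx_tr <- seq_len(n / 2)                   # simple train/test split
mk <- function(i) data.frame(time = obs[i], event = status[i], score = risk[i])

ibs <- sbrier.score2proba(data.tr = mk(idx_tr), data.ts = mk(-idx_tr),
                          method = "cox")
ibs$bsc            # Brier score at each unique test time
ibs$bsc.integrated # Integrated Brier Score over the follow-up period
```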
After the evaluation, the function collates the results for each model, including training times,
AIC values, the c-index, Brier scores, and the AUC at each time point. The results are then
assembled into a structured data frame that is convenient for further analysis and
visualization. Potential issues in the AUC computation, which often arise from sparse samples,
are flagged to the user for further inspection.
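The sketch below illustrates one way such a results table could be assembled and how problematic AUC values might be flagged; the column names and the structure of the input list are assumptions, not the package's actual output format.

```r
# Assemble one row per model and evaluation time; 'results' is assumed to be a
# named list in which each element holds the metrics gathered for one model.
collate_results <- function(results, times) {
  df <- do.call(rbind, lapply(names(results), function(m) {
    r <- results[[m]]
    data.frame(model      = m,
               time       = times,
               train.time = r$train.time,  # elapsed fitting time
               AIC        = r$aic,
               c.index    = r$cindex,
               brier      = r$brier,
               AUC        = r$auc)
  }))
  # Flag AUC values that could not be computed (often due to sparse samples)
  if (anyNA(df$AUC)) {
    warning("Some AUC values are NA; inspect the corresponding time points.")
  }
  df
}
```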