Creates a DALEX explainer and computes permutation-based variable importance, partial dependence (model profiles), and Shapley values.
Usage:

explain_dalex(
  object,
  data = c("train", "test"),
  features = NULL,
  grid_size = 20,
  shap_sample = 5,
  vi_iterations = 10,
  seed = 123,
  loss_function = NULL
)

Value:

Invisibly returns a list with variable importance, optional model profiles, and SHAP values.
Arguments:

object: A fastml object.

data: Character string specifying which data to use for explanations:
  "train" (default) uses training data, "test" uses held-out test data.
  Using test data provides explanations that better reflect model generalization,
  while training data explanations may be influenced by overfitting.

features: Character vector of feature names for partial dependence (model profiles). Default NULL.

grid_size: Number of grid points for partial dependence. Default 20.

shap_sample: Integer number of observations from the selected data source to compute SHAP values for. Default 5.

vi_iterations: Integer. Number of permutations for variable importance (B). Default 10.

seed: Integer. A value specifying the random seed.

loss_function: Function. The loss function for model_parts.
  If NULL and task = 'classification', defaults to DALEX::loss_cross_entropy.
  If NULL and task = 'regression', defaults to DALEX::loss_root_mean_square.
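A minimal usage sketch tying the arguments together. The `fastml()` call below is illustrative only (its exact arguments are an assumption, not taken from this page); the `explain_dalex()` call uses the parameters documented above.

```r
library(fastml)

# Assumed training step: a fastml model fit on iris (hypothetical arguments).
model <- fastml(
  data = iris,
  label = "Species"
)

# Permutation importance (25 permutations), partial dependence for two
# features (30 grid points), and SHAP values for 5 test observations.
expl <- explain_dalex(
  model,
  data = "test",
  features = c("Sepal.Length", "Petal.Width"),
  grid_size = 30,
  shap_sample = 5,
  vi_iterations = 25,
  seed = 42
)

# With task = 'classification' and loss_function = NULL, variable
# importance is computed with DALEX::loss_cross_entropy by default;
# pass e.g. loss_function = DALEX::loss_one_minus_auc to override.
```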