Standardization Methods
refit: This method is based on a complete model re-fit with a standardized
version of the data. It is therefore equivalent to standardizing the variables
before fitting the model. It is the "purest" and most accurate method
(Neter et al., 1989), but also the most computationally costly and the slowest
(especially for heavy models such as Bayesian models). This method is
particularly recommended for complex models that include interactions or
transformations (e.g., polynomial or spline terms). The robust argument
(default FALSE) enables a robust standardization of the data, i.e., based on
the median and MAD instead of the mean and SD. See standardize() for more
details. Note that standardize_parameters(method = "refit") may not return the
same results as fitting a model on data that has been standardized with
standardize(); standardize_parameters() uses the data as used by the
model-fitting function, which might not be the same data if there are missing
values (see the remove_na argument in standardize()).
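As a rough illustration of this equivalence (a sketch assuming
standardize_parameters() is available, e.g., via the parameters package, and
using mtcars, which has no missing values):

    library(parameters)
    library(datawizard)

    m <- lm(mpg ~ wt + hp, data = mtcars)

    # Refit-based standardized coefficients
    standardize_parameters(m, method = "refit")

    # Conceptually the same: standardize the data first, then fit
    m_z <- lm(mpg ~ wt + hp, data = standardize(mtcars))
    coef(m_z)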
posthoc: Post-hoc standardization of the parameters, aiming at emulating the
results obtained by "refit" without refitting the model. The coefficients are
divided by the standard deviation (or MAD if robust) of the outcome (which
becomes their expression 'unit'). Then, the coefficients related to numeric
variables are additionally multiplied by the standard deviation (or MAD if
robust) of the related terms, so that they correspond to changes of 1 SD of
the predictor (e.g., "a change of 1 SD in x is related to a change of 0.24 SD
in y"). This does not apply to binary variables or factors, so their
coefficients are still related to changes in levels. This method is not
accurate and tends to give aberrant results when interactions are specified.
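For a numeric predictor in a simple linear model, the underlying arithmetic
amounts to the following sketch (not the package's internal code):

    m <- lm(mpg ~ wt, data = mtcars)

    # Divide by the SD of the outcome, multiply by the SD of the predictor
    b_std <- coef(m)["wt"] * sd(mtcars$wt) / sd(mtcars$mpg)
    b_std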
basic: This method is similar to method = "posthoc", but treats all variables
as continuous: it additionally scales the coefficients of factor levels (coded
as integer dummy variables in the model matrix) and of binary predictors by
their standard deviation. Although inappropriate for these cases, this is the
method implemented by default in other software packages, such as
lm.beta::lm.beta().
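For a model with only numeric predictors the two approaches agree; a rough
comparison (assuming the lm.beta package is installed):

    m <- lm(mpg ~ wt + hp, data = mtcars)

    standardize_parameters(m, method = "basic")
    lm.beta::lm.beta(m)  # reports the same standardized slopes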
smart (Standardization of Model's parameters with Adjustment, Reconnaissance
and Transformation - experimental): Similar to method = "posthoc" in that it
does not involve model refitting. The difference is that the SD (or MAD if
robust) of the response is computed on the relevant section of the data. For
instance, if a factor with 3 levels A (the intercept), B and C is entered as a
predictor, the effect corresponding to B vs. A will be scaled by the SD of the
response within the reference (intercept) group only. As a result, the
coefficients for effects of factors are similar to a Glass' delta.
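A rough sketch of that factor scaling (an illustration using iris, not the
package's internal code):

    m <- lm(Sepal.Length ~ Species, data = iris)

    # Scale the versicolor-vs-setosa effect by the SD of the response
    # within the reference (intercept) group only, akin to a Glass' delta
    coef(m)["Speciesversicolor"] /
      sd(iris$Sepal.Length[iris$Species == "setosa"])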
pseudo (for 2-level (G)LMMs only): In this (post-hoc) method, the response and
the predictors are standardized based on the level of prediction (levels are
detected with performance::check_heterogeneity_bias()): predictors are
standardized based on their SD at the level of prediction (see also
datawizard::demean()); the outcome (in linear LMMs) is standardized based on a
fitted random-intercept model, where sqrt(random-intercept-variance) is used
for level-2 predictors, and sqrt(residual-variance) is used for level-1
predictors (Hoffman 2015, page 342). A warning is given when a within-group
variable is found to also have between-group variance.
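A minimal sketch of such a call (assuming lme4 and its sleepstudy data are
available):

    library(lme4)

    # Two-level model: repeated measures (level 1) nested in subjects (level 2)
    m <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

    standardize_parameters(m, method = "pseudo")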
Transformed Variables
When the model's formula contains transformations (e.g., y ~ exp(X)),
method = "refit" will give different results compared to method = "basic"
("posthoc" and "smart" do not support such transformations): while "refit"
standardizes the data prior to the transformation (e.g., equivalent to
exp(scale(X))), the "basic" method standardizes the transformed data (e.g.,
equivalent to scale(exp(X))).
See the Transformed Variables section in standardize.default() for more
details on how different transformations are dealt with when
method = "refit".
Confidence Intervals
The returned confidence intervals are re-scaled versions of the
unstandardized confidence intervals, and not "true" confidence intervals of
the standardized coefficients (cf. Jones & Waller, 2015).
Generalized Linear Models
Standardization for generalized linear models (GLM, GLMM, etc.) is done only
with respect to the predictors (while the outcome remains as-is,
unstandardized), maintaining the interpretability of the coefficients (e.g.,
in a binomial model: exponentiating the standardized parameter gives the OR
associated with a change of 1 SD in the predictor, etc.).
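For instance, a toy logistic regression (shown only to illustrate the
interpretation, not package code):

    m <- glm(am ~ scale(wt), data = mtcars, family = binomial)

    # The binary outcome am stays as-is; only the predictor is standardized.
    # Exponentiating the slope gives the OR for a 1-SD increase in wt.
    exp(coef(m)["scale(wt)"])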
Dealing with Factors
standardize(model) or standardize_parameters(model, method = "refit") do not
standardize categorical predictors (i.e., factors) / their dummy variables,
which may be a different behaviour compared to other R packages (such as
lm.beta) or other software packages (like SPSS). To mimic such behaviours,
either use standardize_parameters(model, method = "basic") to obtain post-hoc
standardized parameters, or standardize the data with
datawizard::standardize(data, force = TRUE) before fitting the model.
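For example (a sketch using mtcars with cyl treated as a factor):

    d <- mtcars
    d$cyl <- factor(d$cyl)
    m <- lm(mpg ~ wt + cyl, data = d)

    # Option 1: post-hoc standardization that also scales the dummy variables
    standardize_parameters(m, method = "basic")

    # Option 2: force-standardize the factor in the data, then refit
    d_z <- datawizard::standardize(d, force = TRUE)
    m_z <- lm(mpg ~ wt + cyl, data = d_z)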