parameters (version 0.14.0)

model_parameters: Model Parameters

Description

Compute and extract model parameters. See the documentation for your object's class.

Usage

model_parameters(model, ...)

parameters(model, ...)

Arguments

model

Statistical Model.

...

Arguments passed to or from other methods. Non-documented arguments are digits, p_digits, ci_digits and footer_digits, which set the number of digits for the output. group can also be passed to the print() method. See details in print.parameters_model and 'Examples' in model_parameters.default.
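
For illustration, a minimal sketch of passing these formatting arguments (using the built-in iris data; the argument names are those listed above):

m <- lm(Sepal.Length ~ Species, data = iris)

# digits, ci_digits and p_digits are forwarded to the print() method
model_parameters(m, digits = 2, ci_digits = 3, p_digits = 3)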

Value

A data frame of indices related to the model's parameters.

Labeling the Degrees of Freedom

Throughout the parameters package, we decided to label the residual degrees of freedom df_error. The reason is that these degrees of freedom do not always refer to the residuals. For certain models, they refer to the error of the estimate - in a linear model these are the same, but in, for instance, a mixed effects model, this isn't strictly true. Hence, we think that df_error is the most generic label for these degrees of freedom.
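
As a small illustration (a sketch using the built-in mtcars data), the column is named df_error in the returned data frame; for a linear model it equals the residual degrees of freedom:

m <- lm(mpg ~ wt, data = mtcars)
mp <- model_parameters(m)

# for this linear model, df_error equals the residual degrees of freedom
mp$df_error
df.residual(m)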

Interpretation of Interaction Terms

Note that the interpretation of interaction terms depends on many characteristics of the model. The number of parameters, and the overall performance of the model, may or may not differ between a * b, a : b, and a / b, suggesting that interaction terms sometimes give different parameterizations of the same model, and other times give completely different models (depending on whether a or b are factors or covariates, whether they are included as main effects or not, etc.). Their interpretation depends on the full context of the model, which should not be inferred from the parameters table alone - rather, we recommend using packages that compute estimated marginal means or marginal effects, such as modelbased, emmeans or ggeffects. To raise awareness of this issue, you may use print(..., show_formula = TRUE) to add the model specification to the output of the print() method for model_parameters().
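
The following sketch (using mtcars; not part of the official examples) illustrates how the parameter table changes across interaction specifications, and how to display the model formula alongside it:

m1 <- lm(mpg ~ factor(cyl) * wt, data = mtcars)
m2 <- lm(mpg ~ factor(cyl) : wt, data = mtcars)

# the two specifications yield tables with a different number of parameters
nrow(model_parameters(m1))
nrow(model_parameters(m2))

# add the model specification to the printed output
print(model_parameters(m1), show_formula = TRUE)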

Details

Standardization of model coefficients

Standardization is based on standardize_parameters(). In case of standardize = "refit", the data used to fit the model are standardized and the model is completely refitted. In such cases, standard errors and confidence intervals refer to the standardized coefficients. The default, standardize = "refit", never standardizes categorical predictors (i.e. factors), which may be a different behaviour compared to other R packages or other software (like SPSS). To mimic the behaviour of SPSS or of packages such as lm.beta, use standardize = "basic".
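
A brief sketch of the difference (using the built-in iris data): "refit" leaves the factor untouched, while "basic" scales the dummy-coded factor columns as well:

m <- lm(Sepal.Length ~ Petal.Width + Species, data = iris)

# factor levels are not standardized
model_parameters(m, standardize = "refit")

# mimics SPSS / lm.beta: dummy-coded factor columns are scaled too
model_parameters(m, standardize = "basic")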

Methods of standardization

For full details, please refer to standardize_parameters().

refit

This method is based on a complete model refit with a standardized version of the data. Hence, this method is equivalent to standardizing the variables before fitting the model. It is the "purest" and most accurate approach (Neter et al., 1989), but it is also the most computationally costly and the slowest (especially for heavy models such as Bayesian models). The robust argument (which defaults to FALSE) enables a robust standardization of the data, i.e. based on the median and MAD instead of the mean and SD.
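
Conceptually, the "refit" method gives the same coefficients you would obtain by scaling the variables yourself before fitting; a minimal sketch with all-numeric predictors (mtcars data):

m <- lm(mpg ~ wt + hp, data = mtcars)

# refitting on standardized data ...
m_std <- lm(mpg ~ wt + hp, data = as.data.frame(scale(mtcars)))
coef(m_std)

# ... should match the standardized coefficients from
model_parameters(m, standardize = "refit")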

posthoc

Post-hoc standardization of the parameters, aiming at emulating the results obtained by "refit" without refitting the model. The coefficients are divided by the standard deviation (or MAD if robust = TRUE) of the outcome (which becomes their expression 'unit'). Then, the coefficients related to numeric variables are additionally multiplied by the standard deviation (or MAD) of the related terms, so that they correspond to changes of 1 SD of the predictor. This does not apply to binary variables or factors, so those coefficients still refer to changes in levels. This method is not accurate and tends to give aberrant results when interactions are specified.
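
For a single numeric predictor, the post-hoc computation can be reproduced by hand; a sketch using mtcars:

m <- lm(mpg ~ wt, data = mtcars)

# divide by the SD of the outcome, then multiply by the SD of the predictor
coef(m)["wt"] * sd(mtcars$wt) / sd(mtcars$mpg)

# compare with
model_parameters(m, standardize = "posthoc")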

smart

(Standardization of Model's parameters with Adjustment, Reconnaissance and Transformation - experimental): Similar to method = "posthoc" in that it does not involve model refitting. The difference is that the SD (or MAD) of the response is computed on the relevant section of the data. For instance, if a factor with 3 levels A (the intercept), B and C is entered as a predictor, the effect corresponding to B vs. A will be scaled by the variance of the response at the intercept only. As a result, the coefficients for effects of factors are similar to a Glass' delta.
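
A sketch of the idea with a factor predictor (mtcars data; the manual computation below only approximates what the method does internally):

m <- lm(mpg ~ factor(cyl), data = mtcars)
model_parameters(m, standardize = "smart")

# Glass' delta-like scaling: the effect of cyl == 6 vs. cyl == 4 (the intercept),
# divided by the SD of the response in the reference group only
coef(m)["factor(cyl)6"] / sd(mtcars$mpg[mtcars$cyl == 4])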

basic

This method is similar to method = "posthoc", but treats all variables as continuous: it also scales the coefficients of factor levels (transformed to integers) and of binary predictors by the standard deviation of the corresponding column of the model matrix. Although this is inappropriate for such cases, it is the method implemented by default in other software packages, such as lm.beta::lm.beta().
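
A sketch comparing "basic" with lm.beta (the lm.beta package is only used if it is installed):

m <- lm(mpg ~ wt + factor(am), data = mtcars)
model_parameters(m, standardize = "basic")

# lm.beta also treats the dummy-coded factor like a numeric column
if (requireNamespace("lm.beta", quietly = TRUE)) {
  print(lm.beta::lm.beta(m))
}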

pseudo (for 2-level (G)LMMs only)

In this (post-hoc) method, the response and the predictors are standardized based on the level of prediction (levels are detected with check_heterogeneity): predictors are standardized based on their SD at the level of prediction (see also demean), and the outcome (in linear LMMs) is standardized based on a fitted random-intercept model, where sqrt(random-intercept-variance) is used for level-2 predictors and sqrt(residual-variance) is used for level-1 predictors (Hoffman 2015, page 342). A warning is given when a within-group variable is found to also have between-group variance.
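
A minimal sketch for a two-level linear mixed model (assumes the lme4 package and its sleepstudy data are available):

if (requireNamespace("lme4", quietly = TRUE)) {
  m <- lme4::lmer(Reaction ~ Days + (1 + Days | Subject), data = lme4::sleepstudy)
  print(model_parameters(m, standardize = "pseudo"))
}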

References

  • Hoffman, L. (2015). Longitudinal analysis: Modeling within-person fluctuation and change. Routledge.

  • Neter, J., Wasserman, W., & Kutner, M. H. (1989). Applied linear regression models.

See Also

standardize_names() to rename columns into a consistent, standardized naming scheme.