summ prints output for a regression model in a fashion similar to summary, but formatted differently and with more options.

Usage
# S3 method for glm
summ(model, scale = FALSE, vifs = FALSE, confint = FALSE,
ci.width = 0.95, robust = FALSE, robust.type = "HC3", cluster = NULL,
digits = getOption("jtools-digits", default = 2), odds.ratio = FALSE,
model.info = TRUE, model.fit = TRUE, pvals = TRUE, n.sd = 1,
center = FALSE, scale.response = FALSE, ...)
Arguments

model: A glm object.

scale: If TRUE, reports standardized regression coefficients. Default is FALSE.

vifs: If TRUE, adds a column to the output with variance inflation factors (VIF). Default is FALSE.

confint: Show confidence intervals instead of standard errors? Default is FALSE.

ci.width: A number between 0 and 1 that signifies the width of the desired confidence interval. Default is .95, which corresponds to a 95% confidence interval. Ignored if confint = FALSE.

robust: If TRUE, reports heteroskedasticity-robust standard errors instead of conventional SEs. These are also known as Huber-White standard errors. Default is FALSE. This requires the sandwich package to compute the standard errors.

robust.type: Only used if robust = TRUE. Specifies the type of robust standard errors to be used by sandwich. Set to "HC3" by default. See Details for more on the options.

cluster: For clustered standard errors, provide the column name of the cluster variable in the input data frame (as a string). Alternately, provide a vector of clusters.

digits: An integer specifying the number of digits past the decimal to report in the output. Default is 2. You can change the default number of digits for all jtools functions with options("jtools-digits" = digits), where digits is the desired number.

odds.ratio: If TRUE, reports exponentiated coefficients with confidence intervals for exponential models like logit and Poisson models. This quantity is known as an odds ratio for binary outcomes and an incidence rate ratio for count models.

model.info: Toggles printing of basic information on sample size, name of DV, and number of predictors. Default is TRUE.

model.fit: Toggles printing of R-squared and adjusted R-squared. Default is TRUE.

pvals: Show p values and significance stars? If FALSE, these are not printed. Default is TRUE, except for merMod objects (see Details).

n.sd: If scale = TRUE, how many standard deviations should predictors be divided by? Default is 1, though some suggest 2.

center: If you want coefficients for mean-centered variables but don't want to standardize, set this to TRUE. Default is FALSE.

scale.response: Should standardization apply to the response variable? Default is FALSE.

...: Captures extra arguments that may only work for other types of models.
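As a sketch of how several of these arguments combine, assuming the jtools package providing summ is installed (the model and data below are purely illustrative, not from this page):

```r
# Illustrative logistic regression on the built-in mtcars data
fit <- glm(am ~ hp + wt, data = mtcars, family = binomial)

# Hedged sketch: odds ratios with 90% confidence intervals instead of SEs
if (requireNamespace("jtools", quietly = TRUE)) {
  jtools::summ(fit, odds.ratio = TRUE, confint = TRUE, ci.width = 0.90)
}
```

The requireNamespace() guard keeps the sketch runnable even where jtools is not installed.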
Value

If saved, users can access most of the items that are returned in the output (and without rounding):

- The outputted table of variables and coefficients.
- The model for which statistics are displayed. This would be most useful in cases in which scale = TRUE.

Much other information can be accessed as attributes.
By default, this function will print the following items to the console:
The sample size
The name of the outcome variable
The (Pseudo-)R-squared value and AIC/BIC.
A table with regression coefficients, standard errors, t-values, and p values.
There are several options available for robust.type. The heavy lifting is done by vcovHC, where those are better described. Put simply, you may choose from "HC0" to "HC5". Based on the recommendation of the developers of sandwich, the default is set to "HC3". Stata's default is "HC1", so that choice may be better if the goal is to replicate Stata's output. Any option that is understood by vcovHC will be accepted. Cluster-robust standard errors are computed if cluster is set to the name of the input data's cluster variable or is a vector of clusters.
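To make this concrete, a minimal hedged sketch (the model and the choice of cyl as a cluster variable are illustrative; argument names follow this page):

```r
# Illustrative binary-outcome model on the built-in mtcars data
fit <- glm(am ~ hp + wt, data = mtcars, family = binomial)

if (requireNamespace("jtools", quietly = TRUE) &&
    requireNamespace("sandwich", quietly = TRUE)) {
  # Stata-style robust standard errors
  jtools::summ(fit, robust = TRUE, robust.type = "HC1")
  # Cluster-robust standard errors, clustering on the cyl column
  jtools::summ(fit, robust = TRUE, cluster = "cyl")
}
```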
The scale and center options are performed via refitting the model with scale_lm and center_lm, respectively. Each of those in turn uses gscale for the mean-centering and scaling.
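For instance, a hedged sketch of scaling and centering (the model is illustrative; n.sd = 2 follows the two-standard-deviation suggestion mentioned under the n.sd argument):

```r
# Illustrative Gaussian GLM on the built-in mtcars data
fit <- glm(mpg ~ hp + wt, data = mtcars, family = gaussian)

if (requireNamespace("jtools", quietly = TRUE)) {
  # Standardize predictors by dividing by 2 standard deviations
  jtools::summ(fit, scale = TRUE, n.sd = 2)
  # Mean-center predictors without scaling
  jtools::summ(fit, center = TRUE)
}
```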
References

King, G., & Roberts, M. E. (2015). How robust standard errors expose methodological problems they do not fix, and what to do about it. Political Analysis, 23(2), 159–179. https://doi.org/10.1093/pan/mpu015

Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the normality assumption in large public health data sets. Annual Review of Public Health, 23, 151–169. https://doi.org/10.1146/annurev.publhealth.23.100901.140546
See Also

scale_lm can simply perform the standardization if preferred.

gscale does the heavy lifting for mean-centering and scaling behind the scenes.
Examples

## Dobson (1990) Page 93: Randomized Controlled Trial
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9)
treatment <- gl(3, 3)
print(d.AD <- data.frame(treatment, outcome, counts))
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson)

# Summarize with standardized coefficients
summ(glm.D93, scale = TRUE)
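As a further hedged sketch building on the same Dobson model (self-contained; argument names follow this page, and jtools is assumed to be installed):

```r
# Refit the Dobson (1990) Poisson model from the example above
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9)
treatment <- gl(3, 3)
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson)

if (requireNamespace("jtools", quietly = TRUE)) {
  # Incidence rate ratios (exponentiated coefficients) for this count model
  jtools::summ(glm.D93, odds.ratio = TRUE)
}
```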