summary.lm
Summarizing Linear Model Fits
summary method for class "lm".
- Keywords
- models, regression
Usage
# S3 method for lm
summary(object, correlation = FALSE, symbolic.cor = FALSE, ...)

# S3 method for summary.lm
print(x, digits = max(3, getOption("digits") - 3),
      symbolic.cor = x$symbolic.cor,
      signif.stars = getOption("show.signif.stars"), ...)
Arguments
- object
  an object of class "lm", usually, a result of a call to lm.
- x
  an object of class "summary.lm", usually, a result of a call to summary.lm.
- correlation
  logical; if TRUE, the correlation matrix of the estimated parameters is returned and printed.
- digits
  the number of significant digits to use when printing.
- symbolic.cor
  logical. If TRUE, print the correlations in a symbolic form (see symnum) rather than as numbers.
- signif.stars
  logical. If TRUE, ‘significance stars’ are printed for each coefficient.
- ...
  further arguments passed to or from other methods.
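A minimal sketch of how these printing arguments interact, using simulated data (the objects x, y and fit below are illustrative and not part of this help page):

set.seed(123)
x <- rnorm(20)
y <- 2 + 3 * x + rnorm(20)
fit <- lm(y ~ x)

summary(fit)                                           # default printing
print(summary(fit), digits = 3, signif.stars = FALSE)  # fewer digits, no stars
summary(fit, correlation = TRUE)$correlation           # parameter correlation matrix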
Details
print.summary.lm tries to be smart about formatting the coefficients, standard errors, etc. and additionally gives ‘significance stars’ if signif.stars is TRUE.

Aliased coefficients are omitted in the returned object but restored by the print method.

Correlations are printed to two decimal places (or symbolically): to see the actual correlations, print summary(object)$correlation directly.
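A short sketch of the rounded printed correlations versus the stored values, again with simulated data (x1, x2, y and s are illustrative names):

set.seed(1)
x1 <- rnorm(30)
x2 <- x1 + rnorm(30, sd = 0.3)       # deliberately correlated predictors
y  <- 1 + 2 * x1 - x2 + rnorm(30)
s  <- summary(lm(y ~ x1 + x2), correlation = TRUE)

print(s, symbolic.cor = TRUE)        # correlations coded via symnum()
s$correlation                        # the actual (unrounded) correlation matrix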
Value
The function summary.lm computes and returns a list of summary statistics of the fitted linear model given in object, using the components (list elements) "call" and "terms" from its argument, plus
- residuals
  the weighted residuals, the usual residuals rescaled by the square root of the weights specified in the call to lm.
- coefficients
  a \(p \times 4\) matrix with columns for the estimated coefficient, its standard error, t-statistic and corresponding (two-sided) p-value. Aliased coefficients are omitted.
- aliased
  named logical vector showing if the original coefficients are aliased.
- sigma
  the square root of the estimated variance of the random error, $$\hat\sigma^2 = \frac{1}{n-p}\sum_i{w_i R_i^2},$$ where \(R_i\) is the \(i\)-th residual, residuals[i].
- df
  degrees of freedom, a 3-vector \((p, n-p, p*)\), the first being the number of non-aliased coefficients, the last being the total number of coefficients.
- fstatistic
  (for models including non-intercept terms) a 3-vector with the value of the F-statistic with its numerator and denominator degrees of freedom.
- r.squared
  \(R^2\), the ‘fraction of variance explained by the model’, $$R^2 = 1 - \frac{\sum_i{R_i^2}}{\sum_i(y_i - y^*)^2},$$ where \(y^*\) is the mean of \(y_i\) if there is an intercept and zero otherwise.
- adj.r.squared
  the above \(R^2\) statistic ‘adjusted’, penalizing for higher \(p\).
- cov.unscaled
  a \(p \times p\) matrix of (unscaled) covariances of the \(\hat\beta_j\), \(j = 1, \dots, p\).
- correlation
  the correlation matrix corresponding to the above cov.unscaled, if correlation = TRUE is specified.
- symbolic.cor
  (only if correlation is true.) The value of the argument symbolic.cor.
- na.action
  from object, if present there.
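The returned components can be checked numerically against the formulas above; the following sketch uses simulated data (x, y, fit and s are illustrative names, not part of this help page):

set.seed(42)
x <- rnorm(25)
y <- 1 + 0.5 * x + rnorm(25)
fit <- lm(y ~ x)
s <- summary(fit)

## sigma is sqrt(RSS / (n - p)) for an unweighted fit
all.equal(s$sigma, sqrt(sum(residuals(fit)^2) / df.residual(fit)))

## standard errors come from sigma^2 * cov.unscaled
all.equal(s$coefficients[, "Std. Error"], s$sigma * sqrt(diag(s$cov.unscaled)))

## r.squared is 1 - RSS/TSS (TSS about mean(y), since the model has an intercept)
all.equal(s$r.squared, 1 - sum(residuals(fit)^2) / sum((y - mean(y))^2))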
See Also
The model fitting function lm, summary.

Function coef will extract the matrix of coefficients with standard errors, t-statistics and p-values.
Examples
library(stats)
##-- Continuing the lm(.) example (run example(lm) first so that weight, group and lm.D90 exist):
coef(lm.D90) # the bare coefficients
sld90 <- summary(lm.D90 <- lm(weight ~ group -1)) # omitting intercept
sld90
coef(sld90) # much more
## model with *aliased* coefficient:
lm.D9. <- lm(weight ~ group + I(group != "Ctl"))
Sm.D9. <- summary(lm.D9.)
Sm.D9. # shows the NA NA NA NA line
stopifnot(length(cc <- coef(lm.D9.)) == 3, is.na(cc[3]),
dim(coef(Sm.D9.)) == c(2,4), Sm.D9.$df == c(2, 18, 3))