lnam
fits the linear network autocorrelation model given by
$$y = W_1 y + X \beta + e, \quad e = W_2 e + \nu$$ where $y$ is a vector of responses, $X$ is a covariate matrix, $\nu \sim N(0, \sigma^2)$,
$$W_1 = \sum_{i=1}^p \rho_{1i} W_{1i}, \quad W_2 = \sum_{i=1}^q \rho_{2i} W_{2i},$$
and $W_{1i}$, $W_{2i}$ are (possibly valued) adjacency matrices.
Intuitively, $\rho_1$ is a vector of ``AR''-like parameters (parameterizing the autoregression of each $y$ value on its neighbors in the graphs of $W_1$), while $\rho_2$ is a vector of ``MA''-like parameters (parameterizing the autocorrelation of each disturbance in $y$ on its neighbors in the graphs of $W_2$). In general, the two models are distinct, and either or both effects may be selected by including the appropriate matrix arguments.
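As a concrete illustration, here is a minimal sketch that simulates data from the model above and fits it with lnam; it assumes the sna package is available and uses rgraph to draw random adjacency matrices (the specific seed, size, and parameter values are arbitrary choices for the example):

```r
library(sna)

set.seed(10)
n <- 100
w1 <- rgraph(n)                  # AR adjacency matrix (W_1)
w2 <- rgraph(n)                  # MA adjacency matrix (W_2)
x <- matrix(rnorm(n * 5), n, 5)  # covariate matrix X
r1 <- 0.2; r2 <- 0.1             # autocorrelation parameters rho_1, rho_2
sigma <- 0.1
beta <- rnorm(5)

# Assemble y from its components, following the model definition:
nu <- rnorm(n, 0, sigma)                          # innovations nu
e <- qr.solve(diag(n) - r2 * w2, nu)              # disturbances: e = W_2 e + nu
y <- qr.solve(diag(n) - r1 * w1, x %*% beta + e)  # responses: y = W_1 y + X beta + e

# Fit the linear network autocorrelation model with both effects
fit <- lnam(y, x, w1, w2)
```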
Model parameters are estimated by maximum likelihood, and asymptotic standard errors are provided as well; all of the above (and more) can be obtained by means of the appropriate print and summary methods. A plotting method is also provided, which supplies basic diagnostics for the estimated model. For purposes of comparison, fits may be evaluated against one of four null models:
- meanstd: mean and standard deviation estimated (default).
- mean: mean estimated; standard deviation assumed equal to 1.
- std: standard deviation estimated; mean assumed equal to 0.
- none: no parameters estimated; data assumed to be drawn from a standard normal density.
The default setting should be appropriate for the vast majority of cases, although the others may be of use when fitting ``pure'' autoregressive models (e.g., without covariates). Although a major use of lnam is in controlling for network autocorrelation within a regression context, the model is subtle and has a variety of uses. (See the references below for suggestions.)
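Continuing the simulated example above, a sketch of how the fitted object might be inspected and how a ``pure'' autoregressive fit could be compared against a non-default null model; the null.model argument name is assumed here, and the fit.ar object name is illustrative only:

```r
# Inspect the full fit from the earlier sketch
summary(fit)   # coefficients, rho estimates, standard errors, null-model comparison
plot(fit)      # basic diagnostics for the fitted model

# A "pure" autoregressive fit (no covariates), evaluated against the
# mean-only null model rather than the default "meanstd"
fit.ar <- lnam(y, W1 = w1, null.model = "mean")
summary(fit.ar)
```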