Usage

ridgeVAR1(Y, lambdaA=0, lambdaP=0,
          targetA=matrix(0, dim(Y)[1], dim(Y)[1]),
          targetP=matrix(0, dim(Y)[1], dim(Y)[1]), targetPtype="none",
          fitA="ml", zerosA=matrix(nrow=0, ncol=2),
          zerosAfit="sparse", zerosP=matrix(nrow=0, ncol=2), cliquesP=list(),
          separatorsP=list(), unbalanced=matrix(nrow=0, ncol=2), diagP=FALSE,
          efficient=TRUE, nInit=100, minSuccDiff=0.001)

Arguments

Y : three-dimensional array containing the data. The first, second and third dimensions correspond to covariates, time and samples, respectively. The data are assumed to be centered covariate-wise.

lambdaA : ridge penalty (positive numeric of length 1) to be used in the estimation of $\mathbf{A}$, the matrix with regression coefficients.

lambdaP : ridge penalty (positive numeric of length 1) to be used in the estimation of the inverse error covariance matrix ($\mathbf{\Omega}_{\varepsilon} (=\mathbf{\Sigma}_{\varepsilon}^{-1})$): the precision matrix of the errors.

targetA : target matrix to which the matrix $\mathbf{A}$ is to be shrunken.

targetP : target matrix to which the inverse error covariance matrix, the precision matrix, is to be shrunken.

targetPtype : character indicating the type of target to be used for the precision matrix. When specified it overrules the targetP option. See the default.target function for the options.

fitA : character. If fitA="ml", the parameter $\mathbf{A}$ is estimated by (penalized) maximum likelihood. If fitA="ss", it is estimated by (penalized) sum of squares; the latter is considerably faster.

zerosA : matrix with indices of entries of $\mathbf{A}$ that are constrained to zero. The matrix comprises two columns, each row corresponding to an entry of $\mathbf{A}$. The first column contains the row indices and the second the column indices.

zerosAfit : character, either "sparse" or "dense". With "sparse", the matrix $\mathbf{A}$ is assumed to contain many zeros and a computationally efficient implementation of its estimation is employed. With "dense", it is assumed that $\mathbf{A}$ contains mostly nonzero entries.

zerosP : matrix with indices of entries of the precision matrix that are constrained to zero. The matrix comprises two columns, each row corresponding to an entry of the adjacency matrix. The first column contains the row indices and the second the column indices.

cliquesP : list containing the node indices per clique, as obtained from the rip function.

separatorsP : list containing the node indices per separator, as obtained from the rip function.

unbalanced : matrix with two columns, indicating the unbalances in the design. Each row represents a missing design point in the (time x individual) layout. The first and second columns indicate the time and individual (respectively) of the missing design point.

diagP : logical, indicating whether the inverse error covariance matrix is assumed to be diagonal.

efficient : logical, affecting the estimation of $\mathbf{A}$. Details below.

nInit : maximum number of iterations (positive numeric of length 1) to be used in maximum likelihood estimation.

minSuccDiff : minimum distance (numeric of length 1) between estimates of two successive iterations to be achieved before estimation is considered converged.

Value

A list object with slots:

A : matrix with lag-one auto-regressive coefficients.

P : matrix $\mathbf{\Omega}_{\varepsilon} (=\mathbf{\Sigma}_{\varepsilon}^{-1})$, the estimated precision matrix of the errors.

lambdaA : numeric of length one: ridge penalty used in the estimation of $\mathbf{A}$.

lambdaP : numeric of length one: ridge penalty used in the estimation of the inverse error covariance matrix $\mathbf{\Omega}_{\varepsilon} (=\mathbf{\Sigma}_{\varepsilon}^{-1})$.

Details

When diagP=TRUE, no penalization is applied to the estimation of the covariance matrix. Consequently, the arguments lambdaP and targetP are ignored (if supplied).

The ridge ML estimator employs the following estimator of the variance of the VAR(1) process:
$$\frac{1}{n (\mathcal{T} - 1)} \sum_{i=1}^{n} \sum_{t=2}^{\mathcal{T}} \mathbf{Y}_{\ast,i,t} \mathbf{Y}_{\ast,i,t}^{\mathrm{T}}.$$
This is used when efficient=FALSE. However, a more efficient estimator of this variance can be used:
$$\frac{1}{n \mathcal{T}} \sum_{i=1}^{n} \sum_{t=1}^{\mathcal{T}} \mathbf{Y}_{\ast,i,t} \mathbf{Y}_{\ast,i,t}^{\mathrm{T}},$$
which is achieved by setting efficient=TRUE. Both estimators are adjusted accordingly when dealing with an unbalanced design.
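The difference between the two variance estimators can be illustrated in base R, without any package functions. This is only a sketch: the variable names (Sfull, Seff) are illustrative, and the array layout follows the Y argument (covariates x time x individuals), so $\mathbf{Y}_{\ast,i,t}$ corresponds to Y[, t, i].

```r
# small centered data array: p covariates, Tpts time points, n individuals
p <- 3; Tpts <- 10; n <- 4
set.seed(1)
Y <- array(rnorm(p * Tpts * n), dim = c(p, Tpts, n))

# estimator used when efficient=FALSE: averages outer products over t = 2, ..., T
Sfull <- matrix(0, p, p)
for (i in 1:n) for (t in 2:Tpts) Sfull <- Sfull + Y[, t, i] %*% t(Y[, t, i])
Sfull <- Sfull / (n * (Tpts - 1))

# estimator used when efficient=TRUE: also includes the first time point
Seff <- matrix(0, p, p)
for (i in 1:n) for (t in 1:Tpts) Seff <- Seff + Y[, t, i] %*% t(Y[, t, i])
Seff <- Seff / (n * Tpts)
```

Both are p x p symmetric matrices; the second uses all n*T observations rather than n*(T-1), which is where the efficiency gain comes from.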
See Also

loglikLOOCVVAR1, ridgeP, default.target.

Examples

# set dimensions (p=covariates, n=individuals, T=time points)
p <- 3; n <- 4; T <- 10
# set model parameters
SigmaE <- diag(p)/4
A <- createA(p, "chain")
# generate data
Y <- dataVAR1(n, T, A, SigmaE)
# fit VAR(1) model
ridgeVAR1(Y, 1, 1)$A
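The zerosA argument can be illustrated by extending the example above to constrain a single entry of $\mathbf{A}$ to zero. The sketch below assumes the ragt2ridges package (which provides ridgeVAR1, createA and dataVAR1); the requireNamespace guard keeps it runnable even when the package is absent.

```r
# two-column index matrix: each row is a (row, column) entry of A forced to zero;
# here the single entry A[1, 3] is constrained
zerosA <- matrix(c(1, 3), nrow = 1, ncol = 2)

if (requireNamespace("ragt2ridges", quietly = TRUE)) {
  p <- 3; n <- 4; Tpts <- 10
  A <- ragt2ridges::createA(p, "chain")
  Y <- ragt2ridges::dataVAR1(n, Tpts, A, diag(p) / 4)
  # fit with the (1,3) entry of A constrained to zero
  fit <- ragt2ridges::ridgeVAR1(Y, lambdaA = 1, lambdaP = 1, zerosA = zerosA)
  print(fit$A[1, 3])  # the constrained entry of the estimate
}
```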