gmm(g,x,t0=NULL,gradv=NULL, type=c("twoStep","cue","iterative"),
wmatrix = c("optimal","ident"), vcov=c("HAC","iid"),
kernel=c("Quadratic Spectral","Truncated", "Bartlett",
"Parzen", "Tukey-Hanning"),crit=10e-7,bw = bwAndrews2,
prewhite = FALSE, ar.method = "ols", approx="AR(1)",tol = 1e-7,
itermax=100,intercept=TRUE,optfct=c("optim","optimize"), ...)

gradv: the gradient of g. If not provided, numericDeriv is used; it is of course strongly suggested to provide this function when it is available.
kernel: the kernel used to compute the HAC covariance matrix (see HAC for more details).
bw: the function used to compute the bandwidth. The default is bwAndrews2, which is proposed by Andrews (1991). The alternative is bwNeweyWest2 of Newey and West (1994).
prewhite: logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0, a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.
ar.method: character. The method argument passed to ar for prewhitening.
approx: a character specifying the approximation method if the bandwidth has to be chosen by bwAndrews2.
tol: weights that exceed tol are used for computing the covariance matrix; all other weights are treated as 0.
optfct: the optimization function to be used, either optim or optimize (the latter only when the parameter is one-dimensional).

weightsAndrews2 and bwAndrews2 are simply modified versions of weightsAndrews and bwAndrews from the package sandwich. The modifications have been made so that the argument x can be a matrix instead of an object of class lm or glm. The details on how it works can be found in the sandwich manual.

If we want to estimate a model like $Y_t = \theta_1 + X_{2t}\theta_2 + \cdots + X_{kt}\theta_k + \epsilon_t$ using the moment conditions $Cov(\epsilon_t H_t) = 0$, where $H_t$ is a vector of $Nh$ instruments, then we can define g like we do for lm: we would have g = y~x2+x3+...+xk. $Y_t$ can be a $Ny \times 1$ vector, in which case the number of moment conditions is $Nh \times Ny$. The intercept is included by default, so you do not have to add a column of ones to the matrix $H$. You do not need to provide the gradient in that case, since it is embedded in gmm.
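The formula interface corresponds to linear instrumental-variable moment conditions, for which the two-step estimator has a closed form. As a rough illustration of the two-step arithmetic (a sketch in base R, not the package internals; the simulated data and all object names here are made up):

```r
# Hand-rolled two-step GMM for a linear model with instruments.
# Illustrative sketch only, not the gmm package internals.
set.seed(42)
n  <- 200
h1 <- rnorm(n); h2 <- rnorm(n)        # two instruments
x2 <- h1 + h2 + rnorm(n)              # regressor driven by the instruments
y  <- 1 + 0.5 * x2 + rnorm(n)         # true theta = (1, 0.5)

X <- cbind(1, x2)                     # regressors, intercept included
H <- cbind(1, h1, h2)                 # instruments, intercept included

# For linear moments gbar(theta) = H'(y - X theta)/n, the minimizer of
# gbar' W gbar is available in closed form:
gmm_lin <- function(W) {
  HX <- crossprod(H, X); Hy <- crossprod(H, y)
  drop(solve(t(HX) %*% W %*% HX, t(HX) %*% W %*% Hy))
}

th1 <- gmm_lin(diag(ncol(H)))         # step 1: identity weighting matrix
u   <- drop(y - X %*% th1)            # first-step residuals
S   <- crossprod(H * u) / n           # variance of the moments (iid case)
th2 <- gmm_lin(solve(S))              # step 2: optimal weighting matrix
th2
```

With serially correlated errors, the only change in this sketch would be replacing the iid estimate of S by a HAC estimate, which is what the kernel, bw and prewhite arguments control.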
The function summary is used to obtain and print a summary of the results. It also computes the J-test of overidentifying restrictions.
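For an overidentified model, the J statistic is the sample size times the objective value at the optimum (under the common scaling where the objective is $\bar{g}' S^{-1} \bar{g}$), asymptotically $\chi^2$ with degrees of freedom equal to the number of moment conditions minus the number of parameters. A sketch of that arithmetic with hypothetical numbers, not package output:

```r
# Illustrative J-test arithmetic (all values made up for this sketch):
n_obs <- 500                      # sample size
q <- 5; k <- 3                    # 5 moment conditions, 3 parameters
obj <- 0.004                      # hypothetical objective value at the optimum
J <- n_obs * obj                  # J statistic
pval <- pchisq(J, df = q - k, lower.tail = FALSE)
pval
```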
The object of class "gmm" is a list containing:
par: $k\times 1$ vector of parameters
vcov: the covariance matrix of the parameters
objective: the value of the objective function $\| var(\bar{g})^{-1/2}\bar{g}\|^2$
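The objective value is the quadratic form $\bar{g}' \, var(\bar{g})^{-1} \bar{g}$, written above as a whitened norm. A minimal sketch of evaluating it from an $n \times q$ matrix of moment contributions (hypothetical data; scaling by the sample size is a convention detail and is omitted here):

```r
# Evaluate || var(gbar)^{-1/2} gbar ||^2 two equivalent ways (sketch only).
set.seed(1)
gt   <- matrix(rnorm(200 * 3), 200, 3)   # hypothetical moment contributions
gbar <- colMeans(gt)                     # sample mean of the moments
S    <- var(gt)                          # variance of the moment conditions

obj_quad <- drop(t(gbar) %*% solve(S, gbar))   # gbar' S^-1 gbar

# Same number via whitening: with S = R'R (Cholesky), solve R'z = gbar
# and take ||z||^2.
z <- backsolve(chol(S), gbar, transpose = TRUE)
obj_norm <- sum(z^2)
```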
Andrews DWK (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817--858.
Newey WK & West KD (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703--708.
Newey WK & West KD (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631--653.
Hansen, L.P. (1982), Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029--1054.
Hansen, L.P., Heaton, J. and Yaron, A. (1996), Finite-Sample Properties of Some Alternative GMM Estimators. Journal of Business and Economic Statistics, 14, 262--280.
g <- function(tet, x) {
  n <- nrow(x)
  u <- x[7:n] - tet[1] - tet[2]*x[6:(n-1)] - tet[3]*x[5:(n-2)]
  f <- cbind(u, u*x[4:(n-3)], u*x[3:(n-4)], u*x[2:(n-5)], u*x[1:(n-6)])
  return(f)
}

Dg <- function(tet, x) {
  n <- nrow(x)
  xx <- cbind(rep(1, n-6), x[6:(n-1)], x[5:(n-2)])
  H <- cbind(rep(1, n-6), x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
  f <- -crossprod(H, xx)/(n-6)
  return(f)
}

n <- 500
set.seed(123)
phi <- c(.2, .7)
thet <- 0.2
sd <- .2
x <- matrix(arima.sim(n = n, list(order = c(2,0,1), ar = phi, ma = thet), sd = sd), ncol = 1)

res_2s <- gmm(g, x, c(0, .3, .6), gradv = Dg)
summary(res_2s)

res_iter <- gmm(g, x, c(0, .3, .6), gradv = Dg, type = "iterative")
summary(res_iter)
# The same model but with g as a formula.... much simpler in that case
y <- x[7:n]
ym1 <- x[6:(n-1)]
ym2 <- x[5:(n-2)]
H <- cbind(x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
g <- y ~ ym1 + ym2
x <- H

res_2s <- gmm(g, x)
summary(res_2s)

res_iter <- gmm(g, x, type = "iterative")
summary(res_iter)