gmm(g, x, t0 = NULL, gradv = NULL, type = c("twoStep", "cue", "iterative"),
    wmatrix = c("optimal", "ident"), vcov = c("HAC", "iid"),
    kernel = c("Quadratic Spectral", "Truncated", "Bartlett",
               "Parzen", "Tukey-Hanning"), crit = 10e-7, bw = bwAndrews2,
    prewhite = FALSE, ar.method = "ols", approx = "AR(1)", tol = 1e-7,
    itermax = 100, intercept = TRUE, optfct = c("optim", "optimize"), ...)
gradv: a function returning the derivatives of the moment conditions with respect to the coefficients. If not provided, numericDeriv is used. It is of course strongly suggested to provide this function.
vcov: the assumption made on the moment conditions: "HAC" for weakly dependent processes or "iid" (see HAC for more details).
bw: the method used to compute the bandwidth parameter. The default is bwAndrews2, which is proposed by Andrews (1991); the alternative is bwNeweyWest2 of Newey and West (1994).
prewhite: logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0, a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.
ar.method: character. The method argument passed to ar for prewhitening.
approx: a character specifying the approximation method if the bandwidth has to be chosen by bwAndrews2.
tol: numeric. Weights that exceed tol are used for computing the covariance matrix; all other weights are treated as 0.
optfct: the optimization function to use, optim or optimize (the latter only when a single parameter is estimated).
...: more options to give to optim.

weightsAndrews2 and bwAndrews2 are simply modified versions of weightsAndrews and bwAndrews from the package sandwich. The modifications have been made so that the argument x can be a matrix instead of an object of class lm or glm. The details on how they work can be found in the sandwich manual.

If we want to estimate a model like $Y_t = \theta_1 + X_{2t}\theta_2 + \cdots + X_{kt}\theta_k + \epsilon_t$ using the moment conditions $Cov(\epsilon_t H_t)=0$, where $H_t$ is a vector of $Nh$ instruments, then we can define "g" as we do for lm.
We would have g = y~x2+x3+...+xk, and the argument x would then be the matrix of instruments $H$. As for lm, $Y_t$ can be a $Ny \times 1$ vector, which would imply that $k=Nh \times Ny$. The intercept is included by default, so you do not have to add a column of ones to the matrix $H$. You do not need to provide the gradient in that case since it is embedded in gmm.
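As a minimal sketch of this formula interface (the data-generating step and the variable names y, x2 and H below are illustrative only, not part of the package), one could write:

library(gmm)
set.seed(1)
n  <- 200
H  <- matrix(rnorm(2*n), ncol = 2)   # two instruments
x2 <- H[,1] + rnorm(n)               # regressor correlated with the instruments
y  <- 1 + 0.5*x2 + rnorm(n)          # simulated dependent variable
res <- gmm(y ~ x2, x = H)            # the intercept is added automatically
summary(res)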
The following explains the last example below. Thanks to Dieter Rozenich, a student from the Vienna University of Economics and Business Administration. He suggested that it would help to understand the implementation of the Jacobian.
For the two parameters of a normal distribution $(\mu,\sigma)$ we have the following three moment conditions: $$m_{1} = \mu - x_{i}$$ $$m_{2} = \sigma^2 - (x_{i}-\mu)^2$$ $$m_{3} = x_{i}^{3} - \mu (\mu^2+3\sigma^{2})$$ $m_{1}$ and $m_{2}$ follow directly from the definition of $(\mu,\sigma)$. The third moment condition comes from the third derivative of the moment generating function (MGF) $$M_{X}(t) = \exp\Big(\mu t + \frac{\sigma^{2}t^{2}}{2}\Big)$$ evaluated at $t=0$. Note that we have more equations (3) than unknown parameters (2). The Jacobian of these three moment conditions with respect to $(\mu,\sigma)$ is
$$\begin{pmatrix} 1 & 0 \\ -2\mu+2x_{i} & 2\sigma \\ -3\mu^{2}-3\sigma^{2} & -6\mu\sigma \end{pmatrix}$$
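As a quick sanity check of this Jacobian (a sketch only; the simulated sample, the evaluation point and the step size below are arbitrary), it can be compared with a central finite-difference approximation of the averaged moment conditions:

g_bar <- function(tet, x) {          # averaged moment conditions m1, m2, m3
  c(tet[1] - mean(x),
    tet[2]^2 - mean((x - tet[1])^2),
    mean(x^3) - tet[1]*(tet[1]^2 + 3*tet[2]^2))
}
Dg_bar <- function(tet, x) {         # analytical Jacobian (x_i replaced by its sample mean)
  matrix(c(1, -2*tet[1] + 2*mean(x), -3*tet[1]^2 - 3*tet[2]^2,
           0,  2*tet[2],             -6*tet[1]*tet[2]),
         nrow = 3, ncol = 2)
}
set.seed(1)
x <- rnorm(200, mean = 4, sd = 2)
tet <- c(3.5, 1.8)
eps <- 1e-6
num <- sapply(1:2, function(j) {
  h <- rep(0, 2); h[j] <- eps
  (g_bar(tet + h, x) - g_bar(tet - h, x)) / (2*eps)
})
max(abs(num - Dg_bar(tet, x)))       # should be close to 0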
The function summary is used to obtain and print a summary of the results. It also computes the J-test of overidentifying restrictions.
The object of class "gmm" is a list containing:
par: $k\times 1$ vector of parameters
vcov: the covariance matrix of the parameters
objective: the value of the objective function $\| var(\bar{g})^{-1/2}\bar{g}\|^2$
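For instance, once a model has been fitted as in the examples below (res_2s is used here purely as an illustration), these components can be extracted directly:

res_2s$par                # the k x 1 vector of estimated coefficients
sqrt(diag(res_2s$vcov))   # standard errors from the covariance matrix
res_2s$objective          # value of the GMM objective function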
Andrews DWK (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817--858.
Newey WK & West KD (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703--708.
Newey WK & West KD (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631--653.
Hansen LP (1982), Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029--1054.
Hansen LP, Heaton J & Yaron A (1996), Finite-Sample Properties of Some Alternative GMM Estimators. Journal of Business and Economic Statistics, 14, 262--280.
# Moment conditions: the error u from regressing x_t on a constant and its
# first two lags is orthogonal to lags 3 to 6 of the series, which serve as instruments.
g <- function(tet, x) {
  n <- nrow(x)
  u <- x[7:n] - tet[1] - tet[2]*x[6:(n-1)] - tet[3]*x[5:(n-2)]
  f <- cbind(u, u*x[4:(n-3)], u*x[3:(n-4)], u*x[2:(n-5)], u*x[1:(n-6)])
  return(f)
}

# Analytical gradient of the averaged moment conditions.
Dg <- function(tet, x) {
  n <- nrow(x)
  xx <- cbind(rep(1, n-6), x[6:(n-1)], x[5:(n-2)])
  H <- cbind(rep(1, n-6), x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
  f <- -crossprod(H, xx)/(n-6)
  return(f)
}

# Simulate an ARMA(2,1) series (sd is passed to arima.sim outside the model
# list so that the innovation standard deviation is actually used).
n <- 500
set.seed(123)
phi <- c(.2, .7)
thet <- 0.2
sd <- .2
x <- matrix(arima.sim(n = n, list(order = c(2,0,1), ar = phi, ma = thet), sd = sd), ncol = 1)
res_2s <- gmm(g, x, c(0, .3, .6), gradv = Dg)
summary(res_2s)

res_iter <- gmm(g, x, c(0, .3, .6), gradv = Dg, type = "iterative")
summary(res_iter)
# The same model but with g as a formula.... much simpler in that case
y <- x[7:n]
ym1 <- x[6:(n-1)]
ym2 <- x[5:(n-2)]
H <- cbind(x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
g <- y ~ ym1 + ym2
x <- H
res_2s <- gmm(g, x)
summary(res_2s)

res_iter <- gmm(g, x, type = "iterative")
summary(res_iter)
## The following example has been provided by Dieter Rozenich (see details).
# It generates normal random numbers and uses the GMM to estimate
# mean and sd.
#-------------------------------------------------------------------------------
# Random numbers of a normal distribution
# First we generate normally distributed random numbers and compute the two parameters:
n <- 1000
x <- rnorm(n, mean = 4, sd = 2)

# Implementing the 3 moment conditions
g <- function(tet, x) {
  m1 <- tet[1] - x
  m2 <- tet[2]^2 - (x - tet[1])^2
  m3 <- x^3 - tet[1]*(tet[1]^2 + 3*tet[2]^2)
  f <- cbind(m1, m2, m3)
  return(f)
}

# Implementing the Jacobian
Dg <- function(tet, x) {
  jacobian <- matrix(c(1, 2*(-tet[1] + mean(x)), -3*tet[1]^2 - 3*tet[2]^2,
                       0, 2*tet[2], -6*tet[1]*tet[2]),
                     nrow = 3, ncol = 2)
  return(jacobian)
}

# Now we want to estimate the two parameters using the GMM.
require(gmm)
resgmm <- gmm(g, x, c(0, 0), gradv = Dg)