Usage

gel(g, x, tet0, gradv = NULL, smooth = FALSE, type = c("EL","ET","CUE","ETEL"),
kernel = c("Truncated", "Bartlett"), bw = bwAndrews,
approx = c("AR(1)", "ARMA(1,1)"), prewhite = 1, ar.method = "ols", tol_weights = 1e-7,
tol_lam = 1e-9, tol_obj = 1e-9, tol_mom = 1e-9, maxiterlam = 100, constraint = FALSE,
optfct = c("optim", "optimize", "nlminb"), optlam = c("nlminb", "optim", "iter"), Lambdacontrol = list(), model = TRUE,
X = FALSE, Y = FALSE, TypeGel = "baseGel", alpha = NULL, ...)

Arguments

g: a function of the form g(theta, x) returning an n x q matrix of moment conditions, or a formula if the model is linear (see Details).

x: the matrix or vector of data from which g(theta, x) is computed, or the matrix of instruments if g is a formula.

tet0: a vector of starting values for the coefficients.

gradv: a function of the form G(theta, x) returning the gradient of the moment conditions. If NULL, numericDeriv is used. It is strongly suggested to provide this function.

smooth: if TRUE, the moment conditions are smoothed using the specified kernel.

type: "EL" for empirical likelihood, "ET" for exponential tilting, "CUE" for the continuous updated estimator, or "ETEL" for the exponentially tilted empirical likelihood of Schennach (2007).

kernel: the kernel used to compute the covariance matrix (see kernHAC for more details) and to smooth the moment conditions if "smooth" is set to TRUE. Only two types of kernel are available.

bw: the function used to compute the bandwidth. The default is bwAndrews, which is proposed by Andrews (1991). The alternative is bwNeweyWest of Newey and West (1994).

approx: a character specifying the approximation method for the bandwidth (see bwAndrews).

prewhite: logical or integer. If TRUE or greater than 0, a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.

ar.method: character. The method argument passed to ar for prewhitening.

tol_weights: numeric. Weights that exceed tol_weights are used for computing the covariance matrix; all other weights are treated as 0.

tol_lam: tolerance for the convergence of lambda (see getLamb).

tol_obj: tolerance for the objective function of the lambda optimization (see getLamb).

tol_mom: tolerance for the moment conditions (see getLamb).

maxiterlam: maximum number of iterations for the computation of lambda (see getLamb).

constraint: if TRUE, constrained optimization is performed; see constrOptim to learn how it works. In particular, if you choose to use it, you need to provide "ui" and "ci" in order to impose the constraints.

optfct: the optimization function used to estimate the coefficients: optim, optimize (for one-dimensional theta only) or nlminb.

optlam: the algorithm used to solve for the vector of Lagrange multipliers: nlminb, optim or an iterative procedure (see getLamb).

Lambdacontrol: a list of controls for the optimization of the Lagrange multipliers (see getLamb).

model, X, Y: logical. If TRUE, the corresponding components of the fit (the model frame, the model matrix, the response) are returned if g is a formula.

TypeGel: the name of the class object created by the method getModel. It allows developers to extend the package and create other GEL methods.

alpha: a regularization coefficient whose use depends on type. See Chausse (2011).

...: further arguments passed to optim, optimize or constrOptim.

The function summary is used to obtain and print a summary of the results.
Value

The object of class "gel" is a list containing at least the following components:

lambda: the vector of Lagrange multipliers (see getLamb).

conv_lambda: convergence code for lambda (see getLamb).

conv_par: convergence code for the coefficients (see optim, optimize or constrOptim).

terms: the terms object used when g is a formula.

Details

If the model is linear, g can be a formula, as in lm. We would have g = y ~ x2 + x3 + ... + xk, and the argument "x" above would become the matrix H of instruments. As for lm, $Y_t$ can be a $Ny \times 1$ vector, which would imply that $k = Nh \times Ny$. The intercept is included by default, so you do not have to add a column of ones to the matrix $H$. You do not need to provide the gradient in that case, since it is embedded in gel. The intercept can be removed by adding -1 to the formula; in that case, the column of ones needs to be added manually to H.

If "smooth" is set to TRUE, the sample moment conditions $\sum_{t=1}^n g(\theta, x_t)$ are replaced by $\sum_{t=1}^n g^k(\theta, x_t)$, where $g^k(\theta, x_t) = \sum_{i=-r}^r k(i) g(\theta, x_{t+i})$, $r$ is a truncation parameter that depends on the bandwidth, and the $k(i)$ are weights normalized so that they sum to 1.
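As an illustration of the smoothing formula above, here is a minimal sketch (not the package's internal code) that computes $g^k(\theta, x_t)$ from an n x q matrix of moment conditions, using the uniform weights implied by a truncated kernel; the function name and interface are hypothetical:

```r
# Sketch: kernel-smoothed moment conditions
# g^k(theta, x_t) = sum_{i=-r}^{r} k(i) g(theta, x_{t+i}),
# with truncated-kernel weights k(i) normalized to sum to 1.
smooth_moments <- function(gt, r) {
  # gt: n x q matrix of moment conditions evaluated at theta
  # r:  truncation parameter (depends on the bandwidth)
  w <- rep(1, 2 * r + 1)
  w <- w / sum(w)                      # normalized weights k(i)
  n <- nrow(gt)
  gk <- matrix(0, n, ncol(gt))
  for (t in seq_len(n)) {
    idx  <- (t - r):(t + r)
    keep <- idx >= 1 & idx <= n        # drop terms outside the sample
    gk[t, ] <- colSums(w[keep] * gt[idx[keep], , drop = FALSE])
  }
  gk
}
```

Interior observations are a moving average of 2r + 1 neighbouring moment conditions; near the boundaries, out-of-sample terms are simply dropped in this sketch.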
The method solves
$\hat{\theta} = \arg\min_\theta \left[\max_\lambda \frac{1}{n}\sum_{t=1}^n \rho\big(\lambda' g(\theta, x_t)\big)\right],$
where $\rho(v)$ is a concave function that depends on type: $\rho(v) = \ln(1 - v)$ for "EL", $\rho(v) = -\exp(v)$ for "ET", and $\rho(v) = -v - v^2/2$ for "CUE".
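The standard GEL forms of $\rho$ can be written down directly; the following sketch (not taken verbatim from the package source) shows the three functions corresponding to the "EL", "ET" and "CUE" types:

```r
# rho functions for the inner maximization over lambda
# (standard GEL forms; "ETEL" combines ET and EL steps and is omitted here)
rho <- function(v, type = c("EL", "ET", "CUE")) {
  type <- match.arg(type)
  switch(type,
    EL  = log(1 - v),     # empirical likelihood
    ET  = -exp(v),        # exponential tilting
    CUE = -v - v^2 / 2)   # continuous updated estimator
}
```

All three are concave in v, which is what makes the inner problem over lambda a well-behaved maximization.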
References

Andrews, D.W.K. (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817--858.
Kitamura, Y. (1997), Empirical Likelihood Methods with Weakly Dependent Processes. The Annals of Statistics, 25, 2084--2102.
Newey, W.K. and Smith, R.J. (2004), Higher Order Properties of GMM and Generalized Empirical Likelihood Estimators. Econometrica, 72, 219--255.
Smith, R.J. (2004), GEL Criteria for Moment Condition Models. Working paper, CEMMAP.
Newey, W.K. and West, K.D. (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703--708.
Newey, W.K. and West, K.D. (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631--653.
Schennach, S.M. (2007), Point Estimation with Exponentially Tilted Empirical Likelihood. The Annals of Statistics, 35, 634--672.
Zeileis, A. (2006), Object-oriented Computation of Sandwich Estimators. Journal of Statistical Software, 16(9), 1--16.
Chausse, P. (2010), Computing Generalized Method of Moments and Generalized Empirical Likelihood with R. Journal of Statistical Software, 34(11), 1--35.
Chausse, P. (2011), Generalized Empirical Likelihood for a Continuum of Moment Conditions. Working paper, Department of Economics, University of Waterloo.
Examples

# First, an example with the function g()
g <- function(tet, x)
{
n <- nrow(x)
u <- (x[7:n] - tet[1] - tet[2]*x[6:(n-1)] - tet[3]*x[5:(n-2)])
f <- cbind(u, u*x[4:(n-3)], u*x[3:(n-4)], u*x[2:(n-5)], u*x[1:(n-6)])
return(f)
}
Dg <- function(tet,x)
{
n <- nrow(x)
xx <- cbind(rep(1, (n-6)), x[6:(n-1)], x[5:(n-2)])
H <- cbind(rep(1, (n-6)), x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
f <- -crossprod(H, xx)/(n-6)
return(f)
}
n <- 200
phi <- c(.2, .7)
thet <- 0.2
sd <- .2
set.seed(123)
x <- matrix(arima.sim(n = n, list(order = c(2, 0, 1), ar = phi, ma = thet), sd = sd), ncol = 1)
res <- gel(g, x, c(0, .3, .6), gradv = Dg)
summary(res)
# The same model, but with g as a formula... much simpler in this case
y <- x[7:n]
ym1 <- x[6:(n-1)]
ym2 <- x[5:(n-2)]
H <- cbind(x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
g <- y ~ ym1 + ym2
x <- H
res <- gel(g, x, c(0, .3, .6))
summary(res)