gmm (version 1.0-7)

gel: Generalized Empirical Likelihood estimation

Description

Function to estimate a vector of parameters based on moment conditions using the GEL method, as presented by Newey and Smith (2004) and Anatolyev (2005).

Usage

gel(g,x,tet0,gradv=NULL,smooth=FALSE,type=c("EL","ET","CUE","ETEL"), vcov=c("HAC","iid"), 
    kernel = c("Bartlett", "Parzen", "Truncated", "Tukey-Hanning"), bw=bwAndrews2, 
    approx = c("AR(1)", "ARMA(1,1)"), prewhite = 1, ar.method = "ols", tol_weights = 1e-7, 
    tol_lam=1e-9, tol_obj = 1e-9, tol_mom = 1e-9,maxiterlam=1000,constraint=FALSE,
    intercept=TRUE,optfct=c("optim","optimize"), optlam=c("iter","numeric"),...)

Arguments

g
A function of the form $g(\theta,x)$ which returns an $n \times q$ matrix with typical element $g_i(\theta,x_t)$ for $i=1,...,q$ and $t=1,...,n$. This matrix is then used to build the q sample moment conditions. It can also be a formula if the model is linear (see details below).
tet0
A $k \times 1$ vector of starting values. If the dimension of $\theta$ is one, see the argument "optfct".
x
The matrix or vector of data from which the function $g(\theta,x)$ is computed. If "g" is a formula, it is an $n \times Nh$ matrix of instruments (see details below).
gradv
A function of the form $G(\theta,x)$ which returns a $q\times k$ matrix of derivatives of $\bar{g}(\theta)$ with respect to $\theta$. By default, the numerical algorithm numericDeriv is used. It is strongly suggested to provide this function when possible, since numerical derivatives are less accurate.
smooth
If set to TRUE, the moment function is smoothed as proposed by Kitamura (1997).
type
"EL" for empirical likelihood, "ET" for exponential tilting, "CUE" for continuous updated estimator and "ETEL" for exponentially tilted empirical likelihood of Schennach(2007).
vcov
Assumption on the properties of the random vector x. By default, x is a weakly dependent process. The "iid" option simply avoids using the HAC matrix when computing the covariance matrix of the parameters.
kernel
type of kernel used to compute the covariance matrix of the vector of sample moment conditions (see HAC for more details) and to smooth the moment conditions if "smooth" is set to TRUE.
bw
The method used to compute the bandwidth parameter. By default it is bwAndrews2, which is proposed by Andrews (1991). The alternative is bwNeweyWest2, based on Newey and West (1994).
prewhite
logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0 a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.
ar.method
character. The method argument passed to ar for prewhitening.
approx
a character specifying the approximation method if the bandwidth has to be chosen by bwAndrews2.
tol_weights
numeric. Weights that exceed tol_weights are used for computing the covariance matrix; all other weights are treated as 0.
tol_lam
Tolerance for $\lambda$ between two iterations. The algorithm stops when $\|\lambda_i -\lambda_{i-1}\|$ reaches tol_lam (see get_lamb).
maxiterlam
The algorithm to compute $\lambda$ stops if there is no convergence after "maxiterlam" iterations (see get_lamb).
tol_obj
Tolerance for the gradient of the objective function used to compute $\lambda$ (see get_lamb).
intercept
If "g" is a formula, should the model include a constant? It normally should, but the choice is yours.
optfct
Only when the dimension of $\theta$ is 1 can you choose between the algorithms optim and optimize. In that case, the former is unreliable. If optimize is chosen, "tet0" must be a $1\times 2$ vector giving the interval in which the algorithm searches for the solution.
constraint
If set to TRUE, the constrained optimization algorithm is used. See constrOptim to learn how it works. In particular, if you choose to use it, you need to provide "ui" and "ci" in order to impose the constraints $ui\,\theta - ci \geq 0$.
tol_mom
It is the tolerance for the moment condition $\sum_{t=1}^n p_t g(\theta,x_t)=0$, where $p_t=\frac{1}{n}D\rho(\langle g_t,\lambda\rangle)$ is the implied probability. It adds a penalty if the solution diverges from its goal.
optlam
The default is "iter", which solves for $\lambda$ using the iterative Newton method get_lamb. If set to "numeric", the algorithm optim is used to compute $\lambda$ instead.
...
More options to give to optim, optimize or constrOptim.
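As an illustration of the "iter" option, here is a simplified sketch of a Newton iteration for the inner $\lambda$ problem in the ET case, where $\rho(v)=-e^v$. The helper solve_lambda_et is hypothetical and not the actual get_lamb code:

```r
# Hypothetical sketch (not the actual get_lamb): maximize
# (1/n) sum_t rho(lambda' g_t) for ET, where rho(v) = -exp(v).
solve_lambda_et <- function(gmat, tol = 1e-9, maxiter = 100) {
  lam <- rep(0, ncol(gmat))                # start at lambda = 0
  for (it in 1:maxiter) {
    w    <- exp(drop(gmat %*% lam))        # exp(lambda' g_t) for each t
    grad <- -colMeans(w * gmat)            # gradient of the objective
    hess <- -crossprod(sqrt(w) * gmat) / nrow(gmat)  # Hessian (negative definite)
    step <- solve(hess, grad)              # Newton step
    lam  <- lam - step
    if (sqrt(sum(step^2)) < tol) break     # same stopping idea as tol_lam
  }
  lam
}
```

Since the ET objective is globally concave in $\lambda$, the Newton iteration is typically fast and reliable, which is why "iter" is the default.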

Value

  • 'gel' returns an object of 'class' '"gel"'

    The function 'summary' is used to obtain and print a summary of the results.

    The object of class "gel" is a list containing:

    par: $k\times 1$ vector of parameters

    lambda: $q \times 1$ vector of Lagrange multipliers.

    vcov_par: the covariance matrix of "par"

    vcov_lambda: the covariance matrix of "lambda"

    pt: The implied probabilities

    objective: the value of the objective function.

    conv_lambda: Convergence code for "lambda" (see get_lamb)

    conv_mes: Convergence message for "lambda" (see get_lamb)

    conv_par: Convergence code for "par" (see optim, optimize or constrOptim)

Details

weightsAndrews2 and bwAndrews2 are simply modified versions of weightsAndrews and bwAndrews from the package sandwich. The modifications allow the argument x to be a matrix instead of an object of class lm or glm. The details on how they work can be found in the sandwich manual.

If we want to estimate a model like $Y_t = \theta_1 + X_{2t}\theta_2 + ... + X_{kt}\theta_k + \epsilon_t$ using the moment conditions $Cov(\epsilon_t H_t)=0$, where $H_t$ is a vector of $Nh$ instruments, then we can define "g" as we do for lm. We would have g = y~x2+x3+...+xk, and the argument "x" above would become the matrix H of instruments. As for lm, $Y_t$ can be an $Ny \times 1$ vector, which would imply that $k=Nh \times Ny$. The intercept is included by default, so you do not have to add a column of ones to the matrix $H$. You do not need to provide the gradient in that case, since it is embedded in gel.

If "smooth" is set to TRUE, the sample moment conditions $\sum_{t=1}^n g(\theta,x_t)$ are replaced by $\sum_{t=1}^n g^k(\theta,x_t)$, where $g^k(\theta,x_t)=\sum_{i=-r}^r k(i) g(\theta,x_{t+i})$, $r$ is a truncation parameter that depends on the bandwidth, and the $k(i)$ are weights normalized to sum to 1.
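For illustration, here is a hypothetical helper (not part of gel's interface) applying this smoothing with truncated Bartlett weights; the truncation parameter r is chosen by hand here, whereas gel derives it from the bandwidth:

```r
# Hypothetical sketch: smooth each moment condition over time with
# normalized Bartlett weights k(i) = 1 - |i|/(r+1), as in Kitamura (1997).
smooth_moments <- function(gmat, r) {
  i <- -r:r
  k <- 1 - abs(i)/(r + 1)
  k <- k/sum(k)                            # normalize so the weights sum to 1
  n <- nrow(gmat)
  gs <- matrix(0, n, ncol(gmat))
  for (t in 1:n) {
    idx <- t + i                           # window around observation t
    ok  <- idx >= 1 & idx <= n             # simple truncation at the boundaries
    gs[t, ] <- colSums(k[ok] * gmat[idx[ok], , drop = FALSE])
  }
  gs
}
```

With r = 0 the weights reduce to k(0) = 1 and the moment conditions are unchanged.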

The method solves $\hat{\theta} = \arg\min_\theta \left[\max_\lambda \frac{1}{n}\sum_{t=1}^n \left(\rho(\langle g(\theta,x_t),\lambda\rangle) - \rho(0)\right) \right]$
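The function $\rho$ depends on the chosen type. A minimal sketch of the standard choices from Newey and Smith (2004), under the usual normalization $\rho'(0)=-1$:

```r
# rho functions for the three basic GEL types, with v = lambda' g(theta, x_t)
rho_el  <- function(v) log(1 - v)    # "EL":  empirical likelihood
rho_et  <- function(v) -exp(v)       # "ET":  exponential tilting
rho_cue <- function(v) -v - v^2/2    # "CUE": continuous updating
```

All three have slope $-1$ at $v=0$; "ETEL" (Schennach, 2007) combines the ET and EL criteria.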

References

Anatolyev, S. (2005), GMM, GEL, Serial Correlation, and Asymptotic Bias. Econometrica, 73, 983-1002.

Andrews, D.W.K. (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817-858.

Kitamura, Y. (1997), Empirical Likelihood Methods with Weakly Dependent Processes. The Annals of Statistics, 25, 2084-2102.

Newey, W.K. and Smith, R.J. (2004), Higher Order Properties of GMM and Generalized Empirical Likelihood Estimators. Econometrica, 72, 219-255.

Newey, W.K. and West, K.D. (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703-708.

Newey, W.K. and West, K.D. (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631-653.

Schennach, S.M. (2007), Point Estimation with Exponentially Tilted Empirical Likelihood. The Annals of Statistics, 35, 634-672.

Zeileis, A. (2006), Object-oriented Computation of Sandwich Estimators. Journal of Statistical Software, 16(9), 1-16. URL http://www.jstatsoft.org/v16/i09/.

Examples

# First, an example with the function g()

# Moment conditions: AR(2) residuals interacted with lagged instruments
g <- function(tet, x)
	{
	n <- nrow(x)
	u <- (x[7:n] - tet[1] - tet[2]*x[6:(n-1)] - tet[3]*x[5:(n-2)])
	f <- cbind(u, u*x[4:(n-3)], u*x[3:(n-4)], u*x[2:(n-5)], u*x[1:(n-6)])
	return(f)
	}

# Analytical gradient of the sample moment conditions (q x k matrix)
Dg <- function(tet, x)
	{
	n <- nrow(x)
	xx <- cbind(rep(1, (n-6)), x[6:(n-1)], x[5:(n-2)])
	H  <- cbind(rep(1, (n-6)), x[4:(n-3)], x[3:(n-4)], x[2:(n-5)], x[1:(n-6)])
	f <- -crossprod(H, xx)/(n-6)
	return(f)
	}
n <- 200
phi <- c(.2, .7)
thet <- 0.2
sd <- .2
set.seed(123)
x <- matrix(arima.sim(n = n, list(order = c(2,0,1), ar = phi, ma = thet), sd = sd), ncol = 1)

res <- gel(g, x, c(0, .3, .6), gradv = Dg)
summary(res)

# The same model but with g as a formula....  much simpler in that case

y <- x[7:n]
ym1 <- x[6:(n-1)]
ym2 <- x[5:(n-2)]

H <- cbind(x[4:(n-3)],x[3:(n-4)],x[2:(n-5)],x[1:(n-6)])
g <- y~ym1+ym2
x <- H

res <- gel(g,x,c(0,.3,.6))
summary(res)
