jive.est(y, X, Z, SE = FALSE, n.bt = 100)

The model is identical to the one used in the rest of this package. That is, the second-stage equation is modelled as $y = X\beta + \epsilon$, where $y$ is a vector of $n$ observations on the outcome variable, $X$ is an $n\times k$ matrix of predictors comprising both exogenous and endogenous variables, $\beta$ is the $k$-dimensional vector of parameters of interest, and $\epsilon$ is an unknown vector of error terms. The first-stage level of the model is a multivariate multiple regression; that is, a linear model with a multivariate outcome variable as well as multiple predictors. This first-stage model is written as $X = Z\Gamma + \Delta$, where $X$ is the matrix of predictors from the second-stage equation, $Z$ is an $n \times l$ matrix of instrumental variables (IVs), $\Gamma$ is an $l\times k$ matrix of unknown parameters, and $\Delta$ is an unknown $n\times k$ matrix of error terms.
To compute the JIVE, we first consider the estimator of the regression parameter in the first-stage equation, which is denoted by $$\hat\Gamma := ({Z}^{T}{Z})^{-1}({Z}^{T}{X}).$$ This matrix is of order $l\times k$. The matrix of predictors, ${X}$, projected onto the column space of the instruments is then given by $\hat{X}={Z}\hat\Gamma$. The JIVE proceeds by estimating each row of $\hat{X}$ without using the corresponding data point. That is, the $i$th row in the jackknife matrix, $\hat{X}_{J}$, is estimated without using the $i$th row of ${X}$. This is conducted as follows. For every $i=1,\ldots,n$, we first compute $$\hat\Gamma_{(i)} := ({Z}_{(i)}^{T}{Z}_{(i)})^{-1}({Z}_{(i)}^{T}{X}_{(i)}),$$ where ${Z}_{(i)}$ and ${X}_{(i)}$ denote the matrices ${Z}$ and ${X}$ after removal of the $i$th row, so that these two matrices are of order $(n-1)\times l$ and $(n-1)\times k$, respectively. Then, the matrix $\hat{X}_{J}$ is constructed by stacking these jackknife estimates of $\hat\Gamma$, after each has been pre-multiplied by the corresponding row of ${Z}$, $$\hat{X}_{J} := ({z}_{1}\hat\Gamma_{(1)},\ldots,{z}_{n}\hat\Gamma_{(n)})^{T},$$ where each ${z}_{i}$ is an $l$-dimensional row vector. The JIVE estimator is then obtained by replacing $\hat{X}$ with $\hat{X}_{J}$ in the standard formula for two-stage least squares (TSLS), such that $$\hat\beta_{J} := (\hat{X}_{J}{}^{T}{X})^{-1}(\hat{X}_{J}{}^{T}{y}).$$ In this package, we have additionally made use of the computational formula suggested by Angrist et al. (1999), in which each row of $\hat{X}_{J}$ is calculated using $${z}_{i}\hat\Gamma_{(i)} = \frac{{z}_{i}\hat\Gamma - h_{i}{x}_{i}}{1-h_{i}},$$ where ${z}_{i}\hat\Gamma_{(i)}$, ${z}_{i}\hat\Gamma$ and ${x}_{i}$ are $k$-dimensional row vectors; and with $h_{i}$ denoting the leverage of the corresponding data point in the first-stage equation of our model, such that each $h_{i}$ is defined as ${z}_{i}({Z}^{T}{Z})^{-1}{z}_{i}^{T}$.
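The construction above can be illustrated in a few lines of R. The following is a minimal sketch, not the package's internal implementation: `jive_beta` uses the Angrist et al. (1999) leverage shortcut, while `jive_beta_loo` builds $\hat{X}_{J}$ by explicit leave-one-out regressions, and the two agree. The helper names are illustrative only.

```r
## JIVE via the leverage shortcut: row i of X_J is (z_i Gamma-hat - h_i x_i) / (1 - h_i).
jive_beta <- function(y, X, Z) {
  Ga <- solve(t(Z) %*% Z, t(Z) %*% X)           # first-stage estimate, l x k
  h  <- rowSums((Z %*% solve(t(Z) %*% Z)) * Z)  # leverages h_i = z_i (Z'Z)^{-1} z_i'
  XJ <- (Z %*% Ga - h * X) / (1 - h)            # h recycles down columns: row i scaled by h_i
  solve(t(XJ) %*% X, t(XJ) %*% y)               # (X_J' X)^{-1} X_J' y
}

## Explicit jackknife version, refitting the first stage without observation i.
jive_beta_loo <- function(y, X, Z) {
  n  <- nrow(X)
  XJ <- matrix(0, n, ncol(X))
  for (i in 1:n) {
    Gi <- solve(t(Z[-i, ]) %*% Z[-i, ], t(Z[-i, ]) %*% X[-i, ])
    XJ[i, ] <- Z[i, , drop = FALSE] %*% Gi      # z_i Gamma-hat_(i)
  }
  solve(t(XJ) %*% X, t(XJ) %*% y)
}
```

On simulated data the two routes return the same coefficients up to numerical error, which is the point of the computational formula: one first-stage fit and the leverages replace $n$ refits.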
Angrist, J.D., Imbens, G.W., and Krueger, A.B. (1999). Jackknife instrumental variables estimation. Journal of Applied Econometrics, 14(1), 57--67.
### Generate a simple example with synthetic data, and no intercept.
n <- 100; k <- 3; l <- 3
Ga <- diag(rep(1, l)); be <- rep(1, k)
Z <- matrix(0, n, l); for (j in 1:l) Z[, j] <- rnorm(n)
X <- matrix(0, n, k); for (j in 1:k) X[, j] <- Z[, j] * Ga[j, j] + rnorm(n)
y <- X %*% be + rnorm(n)
### Compute the JIVE estimator, first without and then with standard errors.
print(jive.est(y, X, Z))
print(jive.est(y, X, Z, SE = TRUE))
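The `SE` and `n.bt` arguments suggest that standard errors are obtained by resampling, with `n.bt` controlling the number of replicates; the package's exact scheme is not shown here. As an illustration only, a pairs (case) bootstrap over the rows of $(y, X, Z)$ could be sketched as follows. The helpers `jive_coef` and `jive_boot_se` are hypothetical names, not part of the package.

```r
## JIVE coefficients via the leverage-based computational formula.
jive_coef <- function(y, X, Z) {
  h  <- rowSums((Z %*% solve(t(Z) %*% Z)) * Z)          # leverages h_i
  XJ <- (Z %*% solve(t(Z) %*% Z, t(Z) %*% X) - h * X) / (1 - h)
  solve(t(XJ) %*% X, t(XJ) %*% y)
}

## Illustrative pairs bootstrap: resample rows jointly, re-estimate, take SDs.
jive_boot_se <- function(y, X, Z, n.bt = 100) {
  bt <- replicate(n.bt, {
    id <- sample(nrow(X), replace = TRUE)
    jive_coef(y[id], X[id, , drop = FALSE], Z[id, , drop = FALSE])
  })
  apply(matrix(bt, ncol = n.bt), 1, sd)                 # one SE per coefficient
}
```

Resampling rows jointly preserves the dependence between the instruments, the endogenous predictors, and the outcome, which is what makes the pairs bootstrap a natural default for IV-type estimators.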