The command implements estimation and inference procedures for Synthetic Control (SC) methods using least squares, lasso, ridge, or simplex-type constraints. Uncertainty is quantified using prediction intervals according to Cattaneo, Feng, and Titiunik (2021). scpi returns the estimated post-treatment series for the synthetic unit through the command scest and quantifies in-sample and out-of-sample uncertainty to provide prediction intervals for each point estimate.
Companion Stata and Python packages are described in Cattaneo, Feng, Palomba, and Titiunik (2022).
Companion commands are: scdata and scdataMulti for data preparation in the single and multiple treated unit(s) cases, respectively; scest for point estimation; and scplot and scplotMulti for plots in the single and multiple treated unit(s) cases, respectively.
Related Stata, R, and Python packages useful for inference in SC designs are described in the following website:
https://nppackages.github.io/scpi/
For an introduction to synthetic control methods, see Abadie (2021) and references therein.
scpi(
data,
w.constr = NULL,
V = "separate",
V.mat = NULL,
solver = "ECOS",
P = NULL,
u.missp = TRUE,
u.sigma = "HC1",
u.order = 1,
u.lags = 0,
u.design = NULL,
u.alpha = 0.05,
e.method = "all",
e.order = 1,
e.lags = 0,
e.design = NULL,
e.alpha = 0.05,
sims = 200,
rho = NULL,
rho.max = 0.2,
cores = 1,
plot = FALSE,
plot.name = NULL,
w.bounds = NULL,
e.bounds = NULL,
force.joint.PI.optim = FALSE,
save.data = NULL,
verbose = TRUE
)
The function returns an object of class 'scpi' containing three lists. The first list is labeled 'data' and contains the data used, as returned by scdata, together with some other values:
a matrix containing pre-treatment features of the treated unit(s).
a matrix containing pre-treatment features of the control units.
a matrix containing covariates for adjustment.
a matrix whose rows are the vectors used to predict the out-of-sample series for the synthetic unit(s).
a matrix containing the pre-treatment outcome of the treated unit(s).
a matrix containing the post-treatment outcome of the treated unit(s).
a matrix containing the aggregate pre-treatment outcome of the treated unit(s). This differs from Y.pre only when the option 'effect' in scdataMulti() is set to either 'unit' or 'time'.
a matrix containing the aggregate post-treatment outcome of the treated unit(s). This differs from Y.post only when the option 'effect' in scdataMulti() is set to either 'unit' or 'time'.
a matrix containing the pre-treatment outcome of the control units.
a list containing some specifics of the data:
- J, the number of control units;
- K, a numeric vector with the number of covariates used for adjustment for each feature;
- M, the number of features;
- KM, the total number of covariates used for adjustment for each treated unit;
- KMI, the total number of covariates used for adjustment across all treated units;
- I, the number of treated unit(s);
- period.pre, a numeric vector with the pre-treatment period;
- period.post, a numeric vector with the post-treatment period;
- T0.features, a numeric vector with the number of periods used in estimation for each feature;
- T1.outcome, the number of post-treatment periods;
- constant, for internal use only;
- effect, for internal use only;
- anticipation, the number of periods of potential anticipation effects;
- out.in.features, for internal use only;
- treated.units, a list containing the IDs of all treated units;
- donors.list, a list containing the IDs of the donors of each treated unit.
The second list is labeled 'est.results' and contains all the results from scest.
a matrix containing the estimated weights of the donors.
a matrix containing the values of the covariates used for adjustment.
a matrix containing \(\mathbf{w}\) and \(\mathbf{r}\).
a matrix containing the estimated pre-treatment outcome of the SC unit(s).
a matrix containing the estimated post-treatment outcome of the SC unit(s).
a matrix containing the predicted values of the features of the treated unit(s).
a matrix containing the residuals \(\mathbf{A}-\widehat{\mathbf{A}}\).
a matrix containing the weighting matrix used in estimation.
a list containing the specifics of the constraint set used on the weights.
The third list is labeled 'inference.results' and contains all the inference-related results.
a matrix containing the prediction intervals taking only in-sample uncertainty into account.
a matrix containing the prediction intervals estimating out-of-sample uncertainty with sub-Gaussian bounds.
a matrix containing the prediction intervals estimating out-of-sample uncertainty with a location-scale model.
a matrix containing the prediction intervals estimating out-of-sample uncertainty with quantile regressions.
a list containing the estimated bounds (in-sample and out-of-sample uncertainty).
a matrix containing the estimated (conditional) variance-covariance \(\boldsymbol{\Sigma}\).
a matrix containing the estimated (conditional) mean of the pseudo-residuals \(\mathbf{u}\).
a matrix containing the estimated (conditional) variance-covariance of the pseudo-residuals \(\mathbf{u}\).
a matrix containing the estimated (conditional) mean of the out-of-sample error \(e\).
a matrix containing the estimated (conditional) variance of the out-of-sample error \(e\).
a logical indicating whether the model has been treated as misspecified or not.
an integer containing the number of lags in B used in predicting moments of the pseudo-residuals \(\mathbf{u}\).
an integer containing the order of the polynomial in B used in predicting moments of the pseudo-residuals \(\mathbf{u}\).
a string indicating the estimator used for Sigma
.
a logical indicating whether the design matrix to predict moments of \(\mathbf{u}\) was user-provided.
a scalar indicating the number of observations used to predict moments of \(\mathbf{u}\).
a scalar indicating the number of parameters used to predict moments of \(\mathbf{u}\).
the design matrix used to predict moments of \(\mathbf{u}\).
a scalar determining the confidence level used for in-sample uncertainty, i.e. 1-u.alpha
is the confidence level.
a string indicating the specification used to predict moments of the out-of-sample error \(e\).
an integer containing the number of lags in B used in predicting moments of the out-of-sample error \(e\).
an integer containing the order of the polynomial in B used in predicting moments of the out-of-sample error \(e\).
a logical indicating whether the design matrix to predict moments of \(e\) was user-provided.
a scalar indicating the number of observations used to predict moments of \(e\).
a scalar indicating the number of parameters used to predict moments of \(e\).
a scalar determining the confidence level used for out-of-sample uncertainty, i.e. 1-e.alpha
is the confidence level.
the design matrix used to predict moments of \(e\).
a scalar containing the estimated regularizing parameter that imposes sparsity on the estimated vector of weights.
a list containing the regularized constraint on the norm.
an integer indicating the number of simulations used in quantifying in-sample uncertainty.
a matrix containing the percentage of failed simulations per post-treatment period to estimate lower and upper bounds.
data: a class 'scdata' object, obtained by calling scdata, or a class 'scdataMulti' object, obtained via scdataMulti.
w.constr: a list specifying the constraint set the estimated weights of the donors must belong to. w.constr can contain up to five elements:
- `p', a string indicating the norm to be used (p should be one of "no norm", "L1", and "L2");
- `dir', a string indicating whether the constraint on the norm is an equality ("==") or an inequality ("<=");
- `Q', a scalar defining the value of the constraint on the norm;
- `lb', a scalar defining the lower bound on the weights, which can be either 0 or -Inf;
- `name', a character selecting one of the default proposals.
See the Details section for more.
V: specifies the type of weighting matrix to be used when minimizing the sum of squared residuals
$$(\mathbf{A}-\mathbf{B}\mathbf{w}-\mathbf{C}\mathbf{r})'\mathbf{V}(\mathbf{A}-\mathbf{B}\mathbf{w}-\mathbf{C}\mathbf{r}).$$
The default is the identity matrix, so equal weight is given to all observations. In the case of multiple treated units (i.e., scdataMulti was used to prepare the data), the user can specify V as a string equal to either "separate" or "pooled". If scdata() was used to prepare the data, V is automatically set to "separate", as the two options are equivalent. See the Details section for more.
V.mat: a conformable weighting matrix \(\mathbf{V}\) to be used in the minimization of the sum of squared residuals $$(\mathbf{A}-\mathbf{B}\mathbf{w}-\mathbf{C}\mathbf{r})'\mathbf{V}(\mathbf{A}-\mathbf{B}\mathbf{w}-\mathbf{C}\mathbf{r}).$$ See the Details section for more information on how to prepare this matrix.
solver: a string containing the name of the solver used by CVXR when computing the weights. You can check which solvers are available on your machine by running CVXR::installed_solvers(). More information on what different solvers do can be found at https://cvxr.rbind.io/cvxr_examples/cvxr_using-other-solvers/. "OSQP" is the default solver when 'lasso' is the constraint type, whilst "ECOS" is the default in all other cases.
P: a \(I\cdot T_1\times I\cdot (J+KM)\) matrix containing the design matrix to be used to obtain the predicted post-intervention outcome of the synthetic control unit. \(T_1\) is the number of post-treatment periods, \(J\) is the size of the donor pool, and \(KM\) is the total number of covariates used for adjustment in the outcome equation.
u.missp: a logical indicating whether misspecification should be taken into account when dealing with \(\mathbf{u}\).
u.sigma: a string specifying the type of variance-covariance estimator to be used when estimating the conditional variance of \(\mathbf{u}\).
u.order: a scalar that sets the order of the polynomial in \(\mathbf{B}\) when predicting moments of \(\mathbf{u}\). The default is u.order = 1; however, if there is risk of over-fitting, the command automatically sets it to u.order = 0. See the Details section for more information.
u.lags: a scalar that sets the number of lags of \(\mathbf{B}\) when predicting moments of \(\mathbf{u}\). The default is u.lags = 0; however, if there is risk of over-fitting, the command automatically sets it to u.lags = 0. See the Details section for more information.
u.design: a matrix with the same number of rows as \(\mathbf{A}\) and \(\mathbf{B}\) whose columns specify the design matrix to be used when modeling the estimated pseudo-residuals \(\mathbf{u}\).
u.alpha: a scalar specifying the confidence level for in-sample uncertainty, i.e., 1 - u.alpha is the confidence level.
e.method: a string selecting the method to be used in quantifying out-of-sample uncertainty among: "gaussian", which uses conditional sub-Gaussian bounds; "ls", which specifies a location-scale model for \(\mathbf{e}\); "qreg", which employs quantile regression to get the conditional bounds; and "all", which uses each of the previous methods.
e.order: a scalar that sets the order of the polynomial in \(\mathbf{B}\) when predicting moments of \(\mathbf{e}\). The default is e.order = 1; however, if there is risk of over-fitting, the command automatically sets it to e.order = 0. See the Details section for more information.
e.lags: a scalar that sets the number of lags of \(\mathbf{B}\) when predicting moments of \(\mathbf{e}\). The default is e.lags = 0; however, if there is risk of over-fitting, the command automatically sets it to e.lags = 0. See the Details section for more information.
e.design: a matrix with the same number of rows as \(\mathbf{A}\) and \(\mathbf{B}\) whose columns specify the design matrix to be used when modeling the estimated out-of-sample residuals \(\mathbf{e}\).
e.alpha: a scalar specifying the confidence level for out-of-sample uncertainty, i.e., 1 - e.alpha is the confidence level.
sims: a scalar providing the number of simulations to be used in quantifying in-sample uncertainty.
rho: a string specifying the formula used for the regularizing parameter that imposes sparsity on the estimated vector of weights. Users can instead provide a scalar with their own value for rho. Other options are described in the Details section.
rho.max: a scalar indicating the maximum value attainable by the tuning parameter rho.
cores: the number of cores to be used by the command. The default is one. When the weighting matrix \(\mathbf{V}\) is diagonal, this option has no effect.
plot: a logical specifying whether scplot should be called and a plot saved in the current working directory. For more options see scplot.
plot.name: a string containing the name of the plot (the format is .png by default). For more options see scplot.
w.bounds: a \(N_1\cdot T_1\times 2\) matrix with the user-provided bounds on \(\beta\). If w.bounds is provided, the quantification of in-sample uncertainty is skipped. It is possible to provide only the lower bound or the upper bound by filling the other column with NAs.
e.bounds: a \(N_1\cdot T_1\times 2\) matrix with the user-provided bounds on the out-of-sample error \(e\). If e.bounds is provided, the quantification of out-of-sample uncertainty is skipped. It is possible to provide only the lower bound or the upper bound by filling the other column with NAs.
force.joint.PI.optim: this option exists mostly for backward compatibility. If FALSE (the default), a separate optimization problem is solved for each treated unit when quantifying in-sample uncertainty, as long as the weighting matrix \(\mathbf{V}\) is diagonal. If TRUE, a joint optimization problem is solved for all treated units. Both are valid approaches, as detailed in the main paper (Cattaneo, Feng, Palomba, and Titiunik (2024)). The former is faster and less conservative.
save.data: a character specifying the name and the path of the saved dataframe containing the processed data used to produce the plot.
verbose: if TRUE, prints additional information in the console.
Matias Cattaneo, Princeton University. cattaneo@princeton.edu.
Yingjie Feng, Tsinghua University. fengyj@sem.tsinghua.edu.cn.
Filippo Palomba, Princeton University (maintainer). fpalomba@princeton.edu.
Rocio Titiunik, Princeton University. titiunik@princeton.edu.
Information is provided for the simple case in which \(N_1=1\) if not specified otherwise.
Estimation of Weights. w.constr specifies the constraint set on the weights. First, the element p allows the user to choose between imposing a constraint on the L1 (p = "L1") or the L2 (p = "L2") norm of the weights, or imposing no constraint on the norm (p = "no norm").
Second, Q specifies the value of the constraint on the norm of the weights. Third, lb sets the lower bound of each component of the vector of weights. Fourth, dir sets the direction of the constraint on the norm in case p = "L1" or p = "L2". If dir = "==", then
$$||\mathbf{w}||_p = Q,\:\:\: w_j \geq lb,\:\: j =1,\ldots,J$$
If instead dir = "<=", then
$$||\mathbf{w}||_p \leq Q,\:\:\: w_j \geq lb,\:\: j =1,\ldots,J$$
If instead dir = NULL, no constraint on the norm of the weights is imposed. An alternative to specifying an ad-hoc constraint set on the weights is to choose among some popular types of constraints. This can be done by including the element `name' in the list w.constr. The following options are available:
If name == "simplex" (the default), then
$$||\mathbf{w}||_1 = 1,\:\:\: w_j \geq 0,\:\: j =1,\ldots,J.$$
If name == "lasso", then
$$||\mathbf{w}||_1 \leq Q,$$
where Q is by default equal to 1, but it can be provided as an element of the list (e.g., w.constr = list(name = "lasso", Q = 2)).
If name == "ridge", then
$$||\mathbf{w}||_2 \leq Q,$$
where \(Q\) is a tuning parameter that is by default computed as
$$(J+KM) \widehat{\sigma}_u^{2}/||\widehat{\mathbf{w}}_{OLS}||_{2}^{2}$$
where \(J\) is the number of donors and \(KM\) is the total number of covariates used for adjustment.
The user can provide Q as an element of the list (e.g., w.constr = list(name = "ridge", Q = 1)).
If name == "ols", then the problem is unconstrained and the vector of weights is estimated via ordinary least squares.
If name == "L1-L2", then
$$||\mathbf{w}||_1 = 1,\:\:\: ||\mathbf{w}||_2 \leq Q, \:\:\: w_j \geq 0,\:\: j =1,\ldots,J,$$
where \(Q\) is a tuning parameter computed as in the "ridge" case.
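The default proposals above map directly into scpi calls. A sketch (using the df object constructed in the Examples section; sims is kept small only for speed):

```r
# Simplex (the default): weights sum to one and are non-negative
res.simplex <- scpi(df, w.constr = list(name = "simplex"), sims = 10)

# Lasso-type: bound on the L1 norm of the weights
res.lasso <- scpi(df, w.constr = list(name = "lasso", Q = 2), sims = 10)

# Ridge-type: bound on the L2 norm; Q is tuned automatically if omitted
res.ridge <- scpi(df, w.constr = list(name = "ridge"), sims = 10)

# Unconstrained weights estimated via ordinary least squares
res.ols <- scpi(df, w.constr = list(name = "ols"), sims = 10)

# Simplex plus a bound on the L2 norm
res.l1l2 <- scpi(df, w.constr = list(name = "L1-L2"), sims = 10)
```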
Weighting Matrix.
if V = "separate", then \(\mathbf{V} = \mathbf{I}\) and the minimized objective function is
$$\sum_{i=1}^{N_1} \sum_{l=1}^{M} \sum_{t=1}^{T_{0}}\left(a_{t, l}^{i}-\mathbf{b}_{t, l}^{{i \prime }} \mathbf{w}^{i}-\mathbf{c}_{t, l}^{{i \prime}} \mathbf{r}_{l}^{i}\right)^{2},$$
which optimizes the separate fit for each treated unit.
if V = "pooled", then \(\mathbf{V} = \mathbf{1}\mathbf{1}'\otimes \mathbf{I}\) and the minimized objective function is
$$\sum_{l=1}^{M} \sum_{t=1}^{T_{0}}\left(\frac{1}{N_1^2} \sum_{i=1}^{N_1}\left(a_{t, l}^{i}-\mathbf{b}_{t, l}^{i \prime} \mathbf{w}^{i}-\mathbf{c}_{t, l}^{i\prime} \mathbf{r}_{l}^{i}\right)\right)^{2},$$
which optimizes the pooled fit for the average of the treated units.
if the user wants to provide their own weighting matrix, they must use the option V.mat to input a \(v\times v\) positive-definite matrix, where \(v\) is the number of rows of \(\mathbf{B}\) (or \(\mathbf{C}\)) after potential missing values have been removed. Users who wish to provide their own V should check the appropriate dimension \(v\) by inspecting the output of either scdata or scdataMulti and checking the dimensions of \(\mathbf{B}\) (and \(\mathbf{C}\)). Note that a weighting matrix that is not properly scaled can cause problems for the optimizer. For example, if \(\mathbf{V}\) is diagonal, we suggest dividing each of its entries by \(\|\mathrm{diag}(\mathbf{V})\|_1\).
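For concreteness, the three ways of choosing \(\mathbf{V}\) can be sketched as follows. Here df.multi denotes a hypothetical 'scdataMulti' object, the \(\mathbf{B}\) matrix is assumed accessible as df.multi$B (as suggested by the Value section), and the diagonal entries of the custom matrix are purely illustrative:

```r
# Built-in choices with multiple treated units
res.sep  <- scpi(df.multi, V = "separate", sims = 10)  # separate fit (default)
res.pool <- scpi(df.multi, V = "pooled", sims = 10)    # pooled fit for the average

# User-provided diagonal weighting matrix with the suggested rescaling
v      <- nrow(df.multi$B)               # rows of B after missing values are removed
vdiag  <- rev(seq_len(v))                # illustrative weights favoring earlier rows
V.user <- diag(vdiag / sum(abs(vdiag)))  # divide by the L1 norm of diag(V)
res.custom <- scpi(df.multi, V.mat = V.user, sims = 10)
```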
Regularization. rho is estimated through the formula
$$\varrho = \sqrt{d_0\log(d)\log(T_0)}\mathcal{C}T_0^{-1/2},$$
where \(d\) is the dimension of \(\widehat{\boldsymbol{\beta}}\), \(d_0\) denotes the number of nonzero entries of \(\widehat{\boldsymbol{\beta}}\), and \(\mathcal{C} = \widehat{\sigma}_u / \min_j \widehat{\sigma}_{b_j}\) if rho = 'type-1', whilst \(\mathcal{C} = \max_{j}\widehat{\sigma}_{b_j}\widehat{\sigma}_{u} / \min_j \widehat{\sigma}_{b_j}^2\) if rho = 'type-2'. The option rho = 'type-2' is the default from version 3.0.0 onwards, while 'type-1' was the default in earlier versions. rho defines a new sparse weight vector as
$$\widehat{w}^\star_j = \mathbf{1}(\widehat{w}_j\geq \varrho)$$
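The formula above can be replicated in a few lines. A standalone sketch with illustrative inputs (not the package internals):

```r
# Illustrative inputs
w.hat <- c(0.55, 0.30, 0.10, 0.04, 0.01)   # estimated weights
d     <- length(w.hat)                     # dimension of beta-hat
d0    <- sum(w.hat != 0)                   # nonzero entries of beta-hat
T0    <- 30                                # pre-treatment periods
C     <- 1                                 # C depends on rho = 'type-1' or 'type-2'

varrho <- sqrt(d0 * log(d) * log(T0)) * C * T0^(-1/2)
w.star <- as.numeric(w.hat >= varrho)      # indicator selecting the retained donors
```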
In-sample uncertainty. To quantify in-sample uncertainty it is necessary to model the pseudo-residuals \(\mathbf{u}\).
First of all, estimation of the first moment of \(\mathbf{u}\) can be controlled through
the option u.missp
. When u.missp = FALSE
, then \(\mathbf{E}[u\: |\: \mathbf{D}_u]=0\). If instead u.missp = TRUE
,
then \(\mathbf{E}[\mathbf{u}\: |\: \mathbf{D}_u]\) is estimated using a linear regression of
\(\widehat{\mathbf{u}}\) on \(\mathbf{D}_u\). The default set of variables in \(\mathbf{D}_u\) is composed of \(\mathbf{B}\),
\(\mathbf{C}\) and, if required, it is augmented with lags (u.lags
) and polynomials (u.order
) of \(\mathbf{B}\).
The option u.design
allows the user to provide an ad-hoc set of variables to form \(\mathbf{D}_u\).
Regarding the second moment of \(\mathbf{u}\), different estimators can be chosen:
HC0, HC1, HC2, HC3, and HC4 using the option u.sigma
.
Out-of-sample uncertainty. To quantify out-of-sample uncertainty it is necessary to model the out-of-sample residuals
\(\mathbf{e}\) and estimate relevant moments. By default, the design matrix used during estimation \(\mathbf{D}_e\) is composed of the blocks in
\(\mathbf{B}\) and \(\mathbf{C}\) corresponding to the outcome variable. Moreover, if required by the user, \(\mathbf{D}_e\)
is augmented with lags (e.lags
) and polynomials (e.order
) of \(\mathbf{B}\). The option e.design
allows the user to provide an
ad-hoc set of variables to form \(\mathbf{D}_e\). Finally, the option e.method
allows the user to select one of three
estimation methods: "gaussian" relies on conditional sub-Gaussian bounds; "ls" estimates conditional bounds using a location-scale
model; "qreg" uses conditional quantile regression of the residuals \(\mathbf{e}\) on \(\mathbf{D}_e\).
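Similarly, the out-of-sample side can be controlled as follows (a sketch; df as in the Examples section):

```r
# Quantile-regression bounds for e with a richer design matrix D_e
res <- scpi(df, e.method = "qreg", e.order = 2, e.lags = 1, sims = 10)
```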
Residual Estimation Over-fitting. To estimate conditional moments of \(\mathbf{u}\) and \(e_t\)
we rely on two design matrices, \(\mathbf{D}_u\) and \(\mathbf{D}_e\) (see above). Let \(d_u\) and \(d_e\) be the number of
columns in \(\mathbf{D}_u\) and \(\mathbf{D}_e\), respectively. Assuming no missing values and balanced features, the number of observations used to estimate moments of \(\mathbf{u}\) is \(N_1\cdot T_0\cdot M\), whilst for moments of \(e_t\) it is \(T_0\). Our rule of thumb to avoid over-fitting is to check whether \(N_1\cdot T_0\cdot M \geq d_u + 10\) and \(T_0 \geq d_e + 10\). If the former condition is not satisfied we automatically set u.order = u.lags = 0; if instead the latter is not met, we automatically set e.order = e.lags = 0.
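The rule of thumb above can be written down explicitly. A sketch with illustrative dimensions:

```r
# Illustrative dimensions (balanced features, no missing values)
N1 <- 1; T0 <- 30; M <- 2   # treated units, pre-treatment periods, features
d.u <- 25                   # columns of D_u
d.e <- 8                    # columns of D_e

if (N1 * T0 * M < d.u + 10) message("over-fitting risk: u.order = u.lags = 0")
if (T0 < d.e + 10) message("over-fitting risk: e.order = e.lags = 0")
```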
Abadie, A. (2021). Using synthetic controls: Feasibility, data requirements, and methodological aspects. Journal of Economic Literature, 59(2), 391-425.
Cattaneo, M. D., Feng, Y., and Titiunik, R. (2021). Prediction intervals for synthetic control methods. Journal of the American Statistical Association, 116(536), 1865-1880.
Cattaneo, M. D., Feng, Y., Palomba F., and Titiunik, R. (2022). scpi: Uncertainty Quantification for Synthetic Control Methods, arXiv:2202.05984.
Cattaneo, M. D., Feng, Y., Palomba F., and Titiunik, R. (2022). Uncertainty Quantification in Synthetic Controls with Staggered Treatment Adoption, arXiv:2210.05026.
scdata
, scdataMulti
, scest
, scplot
, scplotMulti
data <- scpi_germany
df <- scdata(df = data, id.var = "country", time.var = "year",
outcome.var = "gdp", period.pre = (1960:1990),
period.post = (1991:2003), unit.tr = "West Germany",
unit.co = setdiff(unique(data$country), "West Germany"),
constant = TRUE, cointegrated.data = TRUE)
result <- scpi(df, w.constr = list(name = "simplex", Q = 1), cores = 1, sims = 10)
result <- scpi(df, w.constr = list(lb = 0, dir = "==", p = "L1", Q = 1),
cores = 1, sims = 10)
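The prediction intervals can then be inspected through the 'inference.results' list; the element names below follow the Value section but should be verified via names(result$inference.results):

```r
# In-sample-only prediction intervals
result$inference.results$CI.in.sample

# Prediction intervals with sub-Gaussian out-of-sample bounds
result$inference.results$CI.all.gaussian

# Visualize point estimates and uncertainty
scplot(result)
```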