Aggregation trees are a three-step procedure. First, the conditional average treatment effects (CATEs) are estimated using any
estimator. Second, a tree is grown to approximate the CATEs. Third, the tree is pruned to derive a nested sequence of optimal
groupings, one for each granularity level. For each granularity level, we can obtain point estimates and valid inference for
the group average treatment effects (GATEs).
To implement this methodology, the user can rely on two core functions that handle the various steps.
Constructing the Sequence of Groupings
build_aggtree constructs the sequence of groupings (i.e., the tree) and estimates the GATEs in each node. The
GATEs can be estimated in several ways, controlled by the method argument. If method == "raw", we
compute the difference in mean outcomes between treated and control observations in each node. This estimator is unbiased
in randomized experiments. If method == "aipw", we construct doubly-robust scores and average them in each node. This
estimator is unbiased in observational studies as well. Honest regression forests and 5-fold cross-fitting are used to estimate the
propensity score and the conditional mean function of the outcome (unless the user supplies the scores argument).
The user can provide vectors of estimated CATEs via the cates_tr and cates_hon arguments. If no CATEs are provided,
they are estimated internally via a causal_forest using only the training sample, that is, Y_tr, D_tr,
and X_tr.
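For instance, a minimal end-to-end sketch might look as follows; the data-generating process and the even training/honest split are illustrative choices, not package requirements:

```r
library(aggTrees)

## Simulated data set (purely illustrative).
set.seed(1986)
n <- 1000
X <- matrix(rnorm(n * 3), ncol = 3)
colnames(X) <- c("x1", "x2", "x3")
D <- rbinom(n, size = 1, prob = 0.5)  # randomized binary treatment
Y <- X[, 1] * D + rnorm(n)            # effects heterogeneous in x1

## Even split into a training sample (grows the tree) and an honest sample.
tr <- sample(seq_len(n), n / 2)
groupings <- build_aggtree(Y_tr = Y[tr], D_tr = D[tr], X_tr = X[tr, ],
                           Y_hon = Y[-tr], D_hon = D[-tr], X_hon = X[-tr, ],
                           method = "aipw")
```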
GATEs Estimation and Inference
inference_aggtree takes as input an aggTrees object constructed by build_aggtree. Then, for
the desired granularity level, chosen via the n_groups argument, it provides point estimates and standard errors for
the GATEs. Additionally, it tests whether the GATEs differ systematically across groups and computes
the average characteristics of the units in each group to investigate the driving mechanisms.
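Continuing the sketch above, and assuming we are interested in the partition with four groups:

```r
## Point estimates, standard errors, pairwise tests, and average
## characteristics for the grouping with four leaves.
results <- inference_aggtree(groupings, n_groups = 4)
```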
Point estimates and standard errors for the GATEs
GATEs and their standard errors are obtained by fitting an appropriate linear model. If method == "raw", we estimate
via OLS the following:
$$Y_i = \sum_{l = 1}^{|T|} L_{i, l} \gamma_l + \sum_{l = 1}^{|T|} L_{i, l} D_i \beta_l + \epsilon_i$$
with $L_{i, l}$ a dummy variable equal to one if the $i$-th unit falls in the $l$-th group, and $|T|$ the
number of groups. If the treatment is randomly assigned, one can show that the $\beta_l$'s identify the GATE of
each group. However, this is not true in observational studies due to selection into treatment. In that case, the user is
expected to use method == "aipw" when calling build_aggtree, so that
inference_aggtree uses the doubly-robust scores in the following regression:
$$score_i = \sum_{l = 1}^{|T|} L_{i, l} \beta_l + \epsilon_i$$
This way, the $\beta_l$'s again identify the GATEs.
Regardless of method, standard errors are estimated via the Eicker-Huber-White estimator.
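To fix ideas, these fits amount to regressions of the following form. This is a hand-rolled sketch of the estimating equations, not the package's internal code; hon is a hypothetical honest-sample data frame with outcome Y, treatment D, doubly-robust scores scores, and a factor leaf recording group membership:

```r
## method == "raw": group-specific intercepts (the gamma_l's) plus
## group-specific treatment effects; the leaf:D coefficients are the GATEs.
fit_raw <- lm(Y ~ 0 + leaf + leaf:D, data = hon)

## method == "aipw": within-group means of the doubly-robust scores;
## the leaf coefficients are the GATEs.
fit_aipw <- lm(scores ~ 0 + leaf, data = hon)

## Eicker-Huber-White standard errors via the sandwich and lmtest
## packages (HC1 is one common variant).
lmtest::coeftest(fit_aipw, vcov = sandwich::vcovHC(fit_aipw, type = "HC1"))
```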
If boot_ci == TRUE, the routine also computes asymmetric bias-corrected and accelerated 95% confidence intervals using 2000 bootstrap
samples. This can be particularly useful when the honest sample is small.
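For example, reusing the hypothetical objects from the sketches above:

```r
## Also compute BCa bootstrap confidence intervals for the GATEs.
results_boot <- inference_aggtree(groupings, n_groups = 4, boot_ci = TRUE)
```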
Hypothesis testing
inference_aggtree uses the standard errors obtained by fitting the linear models above to test, for each pair
of leaves, the null hypothesis that the two GATEs are equal. P-values are adjusted for multiple hypothesis testing
using Holm's procedure.
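The adjustment itself is the standard one from base R. For a hypothetical vector of raw p-values from the pairwise comparisons:

```r
## Illustrative p-values, one per pair of leaves.
raw_pvalues <- c(0.012, 0.034, 0.210, 0.003, 0.450, 0.078)
p.adjust(raw_pvalues, method = "holm")
```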
Average Characteristics
inference_aggtree regresses each covariate on a set of dummies denoting group membership. This way, we get the
average characteristics of the units in each leaf, together with standard errors. Leaves are sorted in increasing order of the
predicted treatment effects (from most negative to most positive). Standard errors are estimated via the Eicker-Huber-White estimator.
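Each of these regressions has the following form; again a sketch using the hypothetical hon data frame from above, with x1 standing in for any covariate:

```r
## Mean of covariate x1 in each leaf, with EHW standard errors.
fit_x1 <- lm(x1 ~ 0 + leaf, data = hon)
lmtest::coeftest(fit_x1, vcov = sandwich::vcovHC(fit_x1, type = "HC1"))
```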
Caution on Inference
Regardless of the chosen method, both functions estimate the GATEs, fit the linear models, and compute the average characteristics
of the units in each group using only observations in the honest sample. If the honest sample is empty (this happens when the
user either does not provide Y_hon, D_hon, and X_hon or sets them to NULL), the same data used to
construct the tree are used to estimate these quantities. This is fine for prediction but invalidates inference.
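For instance, dropping the honest sample from the earlier sketch yields an adaptive fit that is fine for exploring the groupings but should not be used for inference:

```r
## Adaptive version: the tree is built and evaluated on the same data.
groupings_adaptive <- build_aggtree(Y_tr = Y[tr], D_tr = D[tr], X_tr = X[tr, ])
```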