For degrees of smoothness greater than 1, we must generate the lower-order smoothness basis functions using the knot points at the "edge" of the hypercube. For example, consider f(x) = x^2 + x, which is second-order smooth but will not be generated by purely quadratic basis functions. We also need to include the y = x function (which corresponds to a first-order HAL basis function at the leftmost value/edge of x).
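The point above can be checked directly. The sketch below (an illustration, not part of the package) fits f(x) = x^2 + x with purely quadratic truncated-power basis functions, then again after adding the first-order edge term x - min(x), i.e. y = x up to a constant shift:

```r
set.seed(1)
x <- sort(runif(100))
y <- x^2 + x

# Knot points, including one at the leftmost value of x.
knots <- c(min(x), quantile(x, probs = seq(0.2, 0.8, by = 0.2)))

# Purely quadratic basis: ((x - t)_+)^2 at each knot. Every such
# function has zero derivative at min(x), so the linear part of f
# cannot be matched near the left edge.
quad_basis <- sapply(knots, function(t) pmax(x - t, 0)^2)
fit_quad <- lm(y ~ quad_basis)

# Add the first-order edge basis function at the leftmost value of x.
edge <- x - min(x)
fit_edge <- lm(y ~ quad_basis + edge)

# With the edge term the fit is exact up to numerical precision;
# without it, a systematic residual remains.
c(quad_only = sd(resid(fit_quad)), with_edge = sd(resid(fit_edge)))
```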
enumerate_edge_basis(
  x,
  max_degree = 3,
  smoothness_orders = rep(0, ncol(x)),
  include_zero_order = FALSE,
  include_lower_order = FALSE
)
x: An input matrix containing observations and covariates, following standard conventions in problems of statistical learning.
max_degree: The highest order of interaction terms for which the basis functions ought to be generated. Setting this to NULL corresponds to generating basis functions for the full dimensionality of the input matrix; the default shown in the usage above is 3.
smoothness_orders: An integer vector of length ncol(x) specifying the desired smoothness of the function in each covariate. k = 0 is no smoothness (indicator basis), k = 1 is first-order smoothness, and so on. For an additive model, the component function for each covariate will have the degree of smoothness specified by smoothness_orders. For non-additive components (tensor products of univariate basis functions), the univariate basis functions in each tensor product have the smoothness degree specified by smoothness_orders.
include_zero_order: A logical indicating whether the zeroth-order basis functions are included for each covariate (if TRUE), in addition to the smooth basis functions given by smoothness_orders. This allows the algorithm to data-adaptively choose the appropriate degree of smoothness.
include_lower_order: A logical, like include_zero_order, except including all basis functions of lower smoothness degrees than specified via smoothness_orders.