step_feature_hash() creates a specification of a recipe step that will
convert nominal data (e.g. characters or factors) into one or more numeric
binary columns using the levels of the original data.
step_feature_hash(
recipe,
...,
role = "predictor",
trained = FALSE,
num_hash = 2^6,
preserve = FALSE,
columns = NULL,
skip = FALSE,
id = rand_id("feature_hash")
)

# S3 method for step_feature_hash
tidy(x, ...)
recipe: A recipe object. The step will be added to the sequence of operations for this recipe.

...: One or more selector functions to choose which factor variables will be used to create the dummy variables. See selections() for more details. The selected variables must be factors. For the tidy method, these are not currently used.

role: For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the binary dummy variable columns created from the original variables will be used as predictors in a model.

trained: A logical to indicate if the quantities for preprocessing have been estimated.

num_hash: The number of resulting dummy variable columns.

preserve: A single logical; should the selected column(s) be retained (in addition to the new dummy variables)?

columns: A character vector for the selected columns. This is NULL until the step is trained by recipes::prep.recipe().

skip: A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.

id: A character string that is unique to this step to identify it.

x: A step_feature_hash object.
An updated version of recipe with the new step added to the
sequence of existing steps (if any). For the tidy method, a tibble with a
terms column (the selectors or original variables selected).
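As a small usage sketch (assuming a prepped recipe such as rec from the example further down, where the hashing step is the first and only step), the tidy method can be called as:

tidy(rec)             # one row per step, including each step's id
tidy(rec, number = 1) # a tibble with a terms column for the hashing step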
step_feature_hash() will create a set of binary dummy variables
from a factor or character variable. The values themselves are hashed to
determine which of the new columns each value is assigned to (as opposed
to having a specific, pre-determined column that each level maps to).
Since this method does not rely on a pre-determined assignment of levels to columns, new factor levels can be added to the selected columns without issue. Missing values result in missing values for all of the hashed columns.
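To make this concrete, below is a minimal sketch of how a level can be mapped to one of the num_hash columns. It uses digest::digest2int() purely for illustration; it is not necessarily the hash function used internally by step_feature_hash().

library(digest)

num_hash <- 2^6

# Map a level's text to a column index in 1..num_hash by hashing it to an
# integer and taking the result modulo the number of hashing columns.
# (R's %% returns a value in 0..(num_hash - 1) for a positive divisor.)
hash_col <- function(x, num_hash) {
  digest2int(as.character(x)) %% num_hash + 1
}

hash_col(c("new york", "boston", "chicago"), num_hash)
# Different levels can collide, i.e., map to the same column index.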
Note that the assignment of levels to the hashing columns is not optimized
to spread the levels out evenly. It is likely that multiple levels of the
original column will map to the same hashed column (even with small data
sets). Similarly, it is likely that some hashed columns will contain all
zeros. A zero-variance filter (via recipes::step_zv()) is recommended for
any recipe that uses hashed columns.
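For instance (a sketch mirroring the recipe in the example below), step_zv() can be appended directly after the hashing step so that all-zero hashed columns are dropped before modeling:

data(okc, package = "modeldata")

rec_filtered <-
  recipe(Class ~ age + location, data = okc) %>%
  step_feature_hash(location, num_hash = 2^6) %>%
  # remove any predictors (including hashed columns) that contain a single value
  step_zv(all_predictors()) %>%
  prep()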
Weinberger, K., A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. 2009. "Feature Hashing for Large Scale Multitask Learning." In Proceedings of the 26th Annual International Conference on Machine Learning, 1113-20. ACM.
Kuhn, M., and Johnson, K. 2020. Feature Engineering and Selection: A Practical Approach for Predictive Models. CRC/Chapman Hall. https://bookdown.org/max/FES/encoding-predictors-with-many-categories.html
# NOT RUN {
data(okc, package = "modeldata")
# This may take a while:
rec <-
  recipe(Class ~ age + location, data = okc) %>%
  step_feature_hash(location, num_hash = 2^6, preserve = TRUE) %>%
  prep()

# How many of the 135 locations ended up in each hash column?
results <-
  juice(rec, starts_with("location")) %>%
  distinct()

apply(results %>% select(-location), 2, sum) %>% table()
# }