step_texthash() creates a specification of a recipe step that will convert a token variable into multiple numeric variables using the hashing trick.
step_texthash(
recipe,
...,
role = "predictor",
trained = FALSE,
columns = NULL,
signed = TRUE,
num_terms = 1024L,
prefix = "texthash",
sparse = "auto",
keep_original_cols = FALSE,
skip = FALSE,
id = rand_id("texthash")
)
An updated version of recipe
with the new step added
to the sequence of existing steps (if any).
recipe: A recipes::recipe object. The step will be added to the sequence of operations for this recipe.
...: One or more selector functions to choose which variables are affected by the step. See recipes::selections() for more details.
role: For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.
trained: A logical to indicate if the quantities for preprocessing have been estimated.
columns: A character string of variable names that will be populated (eventually) by the terms argument. This is NULL until the step is trained by recipes::prep.recipe().
signed: A logical, indicating whether to use a signed hash function to reduce collisions when hashing. Defaults to TRUE.
num_terms: An integer, the number of variables to output. Defaults to 1024.
prefix: A character string that will be the prefix to the resulting new variables. See notes below.
sparse: A single string. Should the columns produced be sparse vectors? Can take the values "yes", "no", and "auto". If sparse = "auto", then workflows can determine the best option. Defaults to "auto".
keep_original_cols: A logical to keep the original variables in the output. Defaults to FALSE.
skip: A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE.
id: A character string that is unique to this step to identify it.
When you tidy() this step, a tibble is returned with columns terms, value, length, and id:

terms: character, the selectors or variables selected
value: logical, is it signed?
length: integer, number of terms
id: character, id of this step
This step has 2 tuning parameters:

signed: Signed Hash Value (type: logical, default: TRUE)
num_terms: # Hash Features (type: integer, default: 1024)
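Both parameters can be flagged for tuning directly in the step call. Below is a minimal sketch, assuming the tune package and the tate_text data from modeldata used in the example further down; concrete values are supplied later by a tuning function such as tune::tune_grid():

library(recipes)
library(textrecipes)
library(modeldata)
library(tune)
data(tate_text)

# Mark signed and num_terms as tunable placeholders in the recipe.
hash_rec <- recipe(~., data = tate_text) %>%
  step_tokenize(medium) %>%
  step_texthash(medium, signed = tune(), num_terms = tune())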
This step produces sparse columns if sparse = "yes" is set. The default value "auto" won't trigger the production of sparse columns when a recipe is recipes::prep()-ed, but it allows a workflow to toggle to "yes" or "no" depending on whether the model supports recipes::sparse_data and whether the model is expected to run faster with sparse data. The mechanism for determining how much sparsity is produced isn't perfect, and there will be times when you want to manually override it by setting sparse = "yes" or sparse = "no".
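As a minimal sketch of forcing this behaviour at the recipe level (again using the tate_text example data from below), sparse output can be requested explicitly instead of leaving the choice to a workflow:

library(recipes)
library(textrecipes)
library(modeldata)
data(tate_text)

# Force sparse hash columns regardless of the model;
# use sparse = "no" to force dense columns instead.
rec_sparse <- recipe(~., data = tate_text) %>%
  step_tokenize(medium) %>%
  step_texthash(medium, sparse = "yes")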
The underlying operation does not allow for case weights.
Feature hashing, or the hashing trick, is a transformation of a text variable into a new set of numerical variables. This is done by applying a hashing function over the tokens and using the hash values as feature indices. This allows for a low memory representation of the text. This implementation is done using the MurmurHash3 method.
The argument num_terms controls the number of indices that the hashing function will map to. This is the tuning parameter for this transformation. Since the hashing function can map two different tokens to the same index, a higher value of num_terms will result in a lower chance of collision.
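The following toy sketch illustrates the idea of mapping tokens to a fixed number of indices. The character-sum hash used here is only a stand-in for the MurmurHash3 function the step actually uses:

# Toy hash: sum of character codes (a stand-in, NOT MurmurHash3).
toy_hash <- function(token) sum(utf8ToInt(token))

num_terms <- 8
tokens <- c("oil", "paint", "on", "canvas", "oil")

# Each token is assigned to one of num_terms indices. Distinct tokens
# can collide on the same index; a larger num_terms makes this rarer.
indices <- vapply(tokens, toy_hash, integer(1)) %% num_terms + 1
table(indices)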
The new components will have names that begin with prefix, then the name of the variable, followed by the tokens, all separated by -. The variable names are padded with zeros. For example, if prefix = "hash" and num_terms < 10, their names will be hash1 - hash9. If num_terms = 101, their names will be hash001 - hash101.
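A quick base R illustration of the zero padding described above (sprintf() here only mimics the pattern of the generated names; it is not how the step builds them internally):

# With num_terms = 101 the index part is padded to three digits:
paste0("hash", sprintf("%03d", c(1, 101)))
# "hash001" "hash101"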
Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning.
step_tokenize() to turn characters into tokens, and step_text_normalization() to perform text normalization.

Other Steps for Numeric Variables From Tokens: step_lda(), step_tf(), step_tfidf(), step_word_embeddings()
# This example requires the modeldata, text2vec, and data.table packages.
library(textrecipes)
# Limit the number of threads used by data.table and OpenMP
library(data.table)
data.table::setDTthreads(2)
Sys.setenv("OMP_THREAD_LIMIT" = 2)
library(recipes)
library(modeldata)
data(tate_text)
tate_rec <- recipe(~., data = tate_text) %>%
step_tokenize(medium) %>%
step_tokenfilter(medium, max_tokens = 10) %>%
step_texthash(medium)
tate_obj <- tate_rec %>%
prep()
bake(tate_obj, tate_text)
tidy(tate_rec, number = 3)
tidy(tate_obj, number = 3)