step_tf
creates a specification of a recipe step that will convert a tokenlist into multiple variables containing the token counts.
step_tf(
recipe,
...,
role = "predictor",
trained = FALSE,
columns = NULL,
weight_scheme = "raw count",
weight = 0.5,
vocabulary = NULL,
res = NULL,
prefix = "tf",
skip = FALSE,
id = rand_id("tf")
)

# S3 method for step_tf
tidy(x, ...)
recipe: A recipe object. The step will be added to the sequence of operations for this recipe.

...: One or more selector functions to choose variables. For step_tf, this indicates the variables to be encoded into a tokenlist. See recipes::selections() for more details. For the tidy method, these are not currently used.

role: For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.

trained: A logical to indicate if the recipe has been baked.

columns: A list of tibble results that define the encoding. This is NULL until the step is trained by recipes::prep.recipe().

weight_scheme: A character determining the weighting scheme for the term frequency calculations. Must be one of "binary", "raw count", "term frequency", "log normalization" or "double normalization". Defaults to "raw count".

weight: A numeric weight used if weight_scheme is set to "double normalization". Defaults to 0.5.

vocabulary: A character vector of strings to be considered.

res: The words that will be used to calculate the term frequency will be stored here once this preprocessing step has been trained by prep.recipe().

prefix: A character string that will be the prefix to the resulting new variables. See notes below.

skip: A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.

id: A character string that is unique to this step to identify it.

x: A step_tf object.
An updated version of recipe with the new step added to the sequence of existing steps (if any).
It is strongly advised to use step_tokenfilter before step_tf to limit the number of variables created; otherwise you might run into memory issues (see the sketch below). A good strategy is to start with a low token count and increase it according to how much RAM you want to use.
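A minimal sketch of this pattern, assuming the okc_text data from modeldata used in the example below; max_tokens = 100 is an arbitrary starting point, not a tuned value:

library(recipes)
library(textrecipes)
library(modeldata)
data(okc_text)

# Cap the vocabulary before counting so step_tf() creates at most
# 100 new columns, keeping memory use under control.
rec <- recipe(~., data = okc_text) %>%
  step_tokenize(essay0) %>%
  step_tokenfilter(essay0, max_tokens = 100) %>%
  step_tf(essay0)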
Term frequency is a weight of how many times each token appears in each observation. There are different ways to calculate the weight, and this step can do it in a couple of ways. Setting the argument weight_scheme to "binary" will result in a set of binary variables denoting if a token is present in the observation. "raw count" will count the times a token is present in the observation. "term frequency" will divide the count by the total number of words in the document to limit the effect of document length, as longer documents tend to have a word present more times but not necessarily at a higher percentage. "log normalization" takes the log of 1 plus the count; adding 1 avoids taking the log of 0. Finally, "double normalization" is the raw frequency divided by the raw frequency of the most occurring term in the document. This is then multiplied by weight, and weight is added to the result. This is again done to prevent a bias towards longer documents.
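To make the schemes concrete, here is a small standalone sketch (not part of the package) that applies each formula described above to the raw token counts of a single document:

counts <- c(the = 4, cat = 2, sat = 1)  # raw counts for one document

binary      <- as.numeric(counts > 0)       # "binary": is the token present?
raw_count   <- counts                       # "raw count": counts as-is
term_freq   <- counts / sum(counts)         # "term frequency": share of document
log_norm    <- log(1 + counts)              # "log normalization": log of 1 + count
weight      <- 0.5                          # the `weight` argument
double_norm <- weight + weight * counts / max(counts)  # "double normalization"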
The new components will have names that begin with prefix, then the name of the variable, followed by the token, all separated by -. The new variables will be created alphabetically according to token.
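For instance, with the default prefix = "tf" and a tokenized variable essay0, this rule would give a token "cat" a column name of tf-essay0-cat; the actual tokens, and therefore the column names, depend on the training data.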
step_tokenize() to turn a character vector into a tokenlist.

Other tokenlist to numeric steps: step_texthash(), step_tfidf(), step_word_embeddings()
library(recipes)
library(textrecipes)
library(modeldata)
data(okc_text)

okc_rec <- recipe(~., data = okc_text) %>%
  step_tokenize(essay0) %>%
  step_tf(essay0)

okc_obj <- okc_rec %>%
  prep()

bake(okc_obj, okc_text)

tidy(okc_rec, number = 2)
tidy(okc_obj, number = 2)