
textrecipes (version 0.3.0)

step_lemma: Lemmatization of tokenlist variables

Description

step_lemma creates a specification of a recipe step that will extract the lemmas from a tokenlist variable.

Usage

step_lemma(
  recipe,
  ...,
  role = NA,
  trained = FALSE,
  columns = NULL,
  skip = FALSE,
  id = rand_id("lemma")
)

# S3 method for step_lemma
tidy(x, ...)

Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.

...

One or more selector functions to choose variables. For step_lemma, this indicates the variables to be encoded into a tokenlist. See recipes::selections() for more details. For the tidy method, these are not currently used.

role

Not used by this step since no new variables are created.

trained

A logical to indicate if the quantities for preprocessing have been estimated, i.e. whether the recipe has been trained by recipes::prep.recipe().

columns

A character vector of the selected variable names. This is NULL until the step is trained by recipes::prep.recipe().

skip

A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.

id

A character string that is unique to this step to identify it.

x

A step_lemma object.

Value

An updated version of recipe with the new step added to the sequence of existing steps (if any).

Details

This step doesn't perform lemmatization by itself, but rather lets you extract the lemma attribute of the tokenlist. To be able to use step_lemma you need to use a tokenization method that includes lemmatization. Currently, the "spacyr" engine in step_tokenize() provides lemmatization and works well with step_lemma.
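The lemma attribute that step_lemma reads is the same one spacyr produces when parsing text directly. A minimal sketch of where it comes from, assuming the spacyr package and a spaCy English language model are installed (illustrative only, not part of the recipe itself):

library(spacyr)
spacy_initialize()  # start the spaCy backend; needs a model such as "en_core_web_sm"

# spacy_parse() returns one row per token, including a lemma column,
# e.g. "cats" -> "cat" and "ladies" -> "lady"
spacy_parse("With many cats and ladies.", lemma = TRUE)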

See Also

step_tokenize() to turn a character variable into a tokenlist.

Other tokenlist to tokenlist steps: step_ngram(), step_pos_filter(), step_stem(), step_stopwords(), step_tokenfilter(), step_tokenmerge()

Examples

# Not run: requires the spacyr package and a working spaCy installation
library(recipes)

short_data <- data.frame(text = c("This is a short tale,",
                                  "With many cats and ladies."),
                         stringsAsFactors = FALSE)

lemma_rec <- recipe(~ text, data = short_data) %>%
  step_tokenize(text, engine = "spacyr") %>% # tokenizer that supplies lemmas
  step_lemma(text) %>%                       # replace tokens with their lemmas
  step_tf(text)                              # term-frequency features

lemma_obj <- prep(lemma_rec)

juice(lemma_obj)
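The tidy() method shown in Usage can then be applied to the prepped recipe to inspect what the step selected. A brief sketch, building on the example above; the returned tibble typically has terms and id columns, though the exact columns may vary by version:

# step_lemma is the second step in the recipe above
tidy(lemma_obj, number = 2)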
