textrecipes (version 0.3.0)

step_tokenize: Tokenization of character variables

Description

step_tokenize() creates a specification of a recipe step that will convert a character predictor into a tokenlist.

Usage

step_tokenize(
  recipe,
  ...,
  role = NA,
  trained = FALSE,
  columns = NULL,
  options = list(),
  token = "words",
  engine = "tokenizers",
  custom_token = NULL,
  skip = FALSE,
  id = rand_id("tokenize")
)

# S3 method for step_tokenize
tidy(x, ...)

Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.

...

One or more selector functions to choose variables. For step_tokenize(), this indicates the variables to be encoded into a tokenlist. See recipes::selections() for more details. For the tidy method, these are not currently used.

role

Not used by this step since no new variables are created.

trained

A logical to indicate if the quantities for preprocessing have been estimated.

columns

A list of tibble results that define the encoding. This is NULL until the step is trained by recipes::prep.recipe().

options

A list of options passed to the tokenizer.

token

Unit for tokenizing. See details for options. Defaults to "words".

engine

Package that will be used for tokenization. See details for options. Defaults to "tokenizers".

custom_token

User supplied tokenizer. Use of this argument will override the token and engine arguments. Must take a character vector as input and output a list of character vectors (a short sketch appears at the end of the Details section).

skip

A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.

id

A character string that is unique to this step to identify it.

x

A step_tokenize object.

Value

An updated version of recipe with the new step added to the sequence of existing steps (if any).

Details

Tokenization is the act of splitting a character string into smaller parts to be further analysed. This step uses the tokenizers package, which includes heuristics to split the text into paragraph tokens, word tokens, and other units. textrecipes keeps the tokens in a tokenlist, and other steps will do their tasks on those tokenlists before transforming them back to numeric variables.
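
What the tokenizer returns can be illustrated with a minimal sketch using the tokenizers package directly, outside of a recipe (the text vector is made up for the example): a character vector goes in, and a list of character vectors, one per input string, comes out.

library(tokenizers)

text <- c("This is the first sentence.",
          "And here is a second, slightly longer one.")

# Each string becomes a character vector of word tokens;
# tokenize_words() lowercases and strips punctuation by default.
tokenize_words(text)

This character-vector-in, list-of-character-vectors-out contract is also the one a function supplied through custom_token must follow.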

The choice of engine determines the possible choices of token.

If engine = "tokenizers":

  • "words" (default)

  • "characters"

  • "character_shingles"

  • "ngrams"

  • "skip_ngrams"

  • "sentences"

  • "lines"

  • "paragraphs"

  • "regex"

  • "tweets"

  • "ptb" (Penn Treebank)

  • "skip_ngrams"

  • "word_stems"

if engine = "spacyr"

  • "words"

Working with textrecipes will almost always start by calling step_tokenize() followed by modifying and filtering steps. This is not always the case, as you sometimes want to apply pre-tokenization steps; this can be done with recipes::step_mutate().
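
A rough sketch of that pattern, including a user-supplied tokenizer as described under custom_token; the follow-up steps step_tokenfilter() and step_tf(), the max_tokens value, and the whitespace tokenizer are illustrative choices, not requirements:

library(recipes)
library(textrecipes)
library(modeldata)
data(okc_text)

rec <- recipe(~ ., data = okc_text) %>%
  # pre-tokenization step: lower-case the raw text with step_mutate()
  step_mutate(essay0 = tolower(essay0)) %>%
  step_tokenize(essay0) %>%
  # typical modifying/filtering steps acting on the tokenlist
  step_tokenfilter(essay0, max_tokens = 100) %>%
  step_tf(essay0)

# a user-supplied tokenizer via custom_token:
# takes a character vector, returns a list of character vectors
rec_custom <- recipe(~ ., data = okc_text) %>%
  step_tokenize(essay0, custom_token = function(x) strsplit(x, " ", fixed = TRUE))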

See Also

step_untokenize() to untokenize.

Examples

library(recipes)
library(modeldata)
data(okc_text)

# tokenize essay0 into word tokens (the default unit)
okc_rec <- recipe(~ ., data = okc_text) %>%
  step_tokenize(essay0)
  
okc_obj <- okc_rec %>%
  prep()

juice(okc_obj, essay0) %>%
  slice(1:2)

juice(okc_obj) %>%
  slice(2) %>%
  pull(essay0)
  
tidy(okc_rec, number = 1)
tidy(okc_obj, number = 1)

# the same data, tokenized into individual characters instead
okc_obj_chars <- recipe(~ ., data = okc_text) %>%
  step_tokenize(essay0, token = "characters") %>%
  prep()

juice(okc_obj_chars) %>%
  slice(2) %>%
  pull(essay0)