
textrecipes (version 0.1.0)

step_word_embeddings: Pretrained word embeddings of tokens

Description

`step_word_embeddings` creates a *specification* of a recipe step that will convert a list of tokens into word-embedding dimensions by aggregating the vectors of each token from a pre-trained embedding.

Usage

step_word_embeddings(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  columns = NULL,
  embeddings,
  aggregation = c("sum", "mean", "min", "max"),
  prefix = "w_embed",
  skip = FALSE,
  id = rand_id("word_embeddings")
)

# S3 method for step_word_embeddings
tidy(x, ...)

Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.

...

One or more selector functions to choose variables. For `step_word_embeddings`, this indicates the variables to be encoded into a list column. See [recipes::selections()] for more details. For the `tidy` method, these are not currently used.

role

For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.

trained

A logical to indicate if the quantities for preprocessing have been estimated (that is, whether the recipe has been prepped).

columns

A list of tibble results that define the encoding. This is `NULL` until the step is trained by [recipes::prep.recipe()].

embeddings

A tibble of pre-trained word embeddings, such as those returned by the embedding_glove function from the textdata package. The first column should contain tokens, and the remaining columns should contain the embedding vectors. A sketch of the expected format appears after this argument list.

aggregation

A character giving the name of the aggregation function to use: one of "sum" (the default), "mean", "min", or "max".

prefix

A character string that will be the prefix of the resulting new variables. See the Details section below.

skip

A logical. Should the step be skipped when the recipe is baked by [recipes::bake.recipe()]? While all operations are baked when [recipes::prep.recipe()] is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using `skip = TRUE` as it may affect the computations for subsequent operations.

id

A character string that is unique to this step to identify it.

x

A `step_word_embeddings` object.
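
As an illustration of the expected `embeddings` format, here is a minimal sketch that loads real pre-trained vectors with the textdata package (an assumption: textdata is installed, and the first call prompts you to download the GloVe files):

library(textdata)

# A tibble whose first column holds the tokens and whose remaining columns
# (d1, d2, ..., d100) hold the embedding dimensions.
glove6b <- embedding_glove6b(dimensions = 100)
glove6b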

Value

An updated version of `recipe` with the new step added to the sequence of existing steps (if any).

Details

Word embeddings map words (or other tokens) into a high-dimensional feature space. This function maps pre-trained word embeddings onto the tokens in your data.

The argument `embeddings` provides the pre-trained vectors. Each dimension present in this tibble becomes a new feature column; for every row of your text, the vectors of its tokens are collapsed into a single value per dimension using the function supplied in the `aggregation` argument.

The new components will have names that begin with `prefix`, then the name of the aggregation function, then the name of the variable from the embeddings tibble (usually something like "d7"). For example, using the default "w_embed" prefix, the default "sum" aggregation, and the GloVe embeddings from the textdata package (where the column names are `d1`, `d2`, etc.), the new columns would be `w_embed_sum_d1`, `w_embed_sum_d2`, etc.
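
To make the aggregation concrete, here is a minimal sketch of the underlying idea (not the package's implementation): for the tokens of a single document, look up each token's row in the embeddings tibble and collapse the embedding columns with the chosen function.

library(dplyr)
library(tibble)

embeddings <- tibble(
  tokens = c("the", "cat", "ran"),
  d1 = c(1, 0, 0),
  d2 = c(0, 1, 0),
  d3 = c(0, 0, 1)
)

doc_tokens <- c("the", "cat")  # tokens from one row of the text column

embeddings %>%
  filter(tokens %in% doc_tokens) %>%
  summarise(across(c(d1, d2, d3), sum))
# In the recipe output these values would appear in columns named
# w_embed_sum_d1, w_embed_sum_d2, and w_embed_sum_d3.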

See Also

[step_tokenize()], [step_lda()]

Examples

library(recipes)
library(textrecipes)
library(tibble)

# A toy embedding: each of the three tokens maps to a 3-dimensional vector.
embeddings <- tibble(
  tokens = c("the", "cat", "ran"),
  d1 = c(1, 0, 0),
  d2 = c(0, 1, 0),
  d3 = c(0, 0, 1)
)

sample_data <- tibble(
  text = c(
    "The.",
    "The cat.",
    "The cat ran."
  ),
  text_label = c("fragment", "fragment", "sentence")
)

rec <- recipe(text_label ~ ., data = sample_data) %>%
  step_tokenize(text) %>%
  step_word_embeddings(text, embeddings = embeddings)
  
obj <- rec %>%
  prep(training = sample_data)

bake(obj, new_data = sample_data)

tidy(rec, number = 2)  # untrained step
tidy(obj, number = 2)  # trained step
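
As a further sketch built on the same toy data (an assumption: the output column names follow the prefix/aggregation/dimension pattern described in Details), the recipe can use a different aggregation:

rec_mean <- recipe(text_label ~ ., data = sample_data) %>%
  step_tokenize(text) %>%
  step_word_embeddings(text, embeddings = embeddings, aggregation = "mean")

# The resulting columns would be named w_embed_mean_d1, w_embed_mean_d2, etc.
bake(prep(rec_mean, training = sample_data), new_data = sample_data)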
