tokenizers (version 0.1.0)

basic-tokenizers: Basic tokenizers

Description

These functions perform basic tokenization into words, sentences, paragraphs, lines, and characters. The functions can be piped into one another to create at most two levels of tokenization. For instance, one might split a text into paragraphs and then word tokens, or into sentences and then word tokens.
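As a sketch of what two-level tokenization means, the same effect can be approximated in base R with strsplit (the package's own functions add lowercasing, punctuation stripping, and smarter boundary detection; the splitting patterns below are simplifying assumptions):

```r
# Base-R sketch of paragraph-then-word tokenization: assumes "\n\n"
# separates paragraphs and runs of whitespace separate words.
text <- "First paragraph here.\n\nSecond paragraph here."
paragraphs <- strsplit(text, "\n\n", fixed = TRUE)[[1]]
words_by_paragraph <- strsplit(paragraphs, "\\s+")
# words_by_paragraph is a list with one character vector of word
# tokens per paragraph.
```

This is the two-level structure the description refers to: an outer split (paragraphs) whose pieces are each split again (words).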

Usage

tokenize_characters(x, lowercase = TRUE, strip_non_alphanum = TRUE,
  simplify = FALSE)

tokenize_words(x, lowercase = TRUE, simplify = FALSE)

tokenize_sentences(x, lowercase = FALSE, strip_punctuation = FALSE, simplify = FALSE)

tokenize_lines(x, simplify = FALSE)

tokenize_paragraphs(x, paragraph_break = "\n\n", simplify = FALSE)

tokenize_regex(x, pattern = "\\s+", simplify = FALSE)

Arguments

x
A character vector or a list of character vectors to be tokenized. If x is a character vector, it can be of any length, and each element will be tokenized separately. If x is a list of character vectors, each element of the list should have a length of 1.
lowercase
Should the tokens be made lower case? The default value varies by tokenizer; it is TRUE by default only for the tokenizers that are likely to be used last in a pipeline.
strip_non_alphanum
Should punctuation and white space be stripped?
simplify
FALSE by default so that a consistent value is returned regardless of length of input. If TRUE, then an input with a single element will return a character vector of tokens instead of a list.
strip_punctuation
Should punctuation be stripped?
paragraph_break
A string identifying the boundary between two paragraphs.
pattern
A regular expression that defines the split.

Value

A list of character vectors containing the tokens, with one element in the list for each element that was passed as input. If simplify = TRUE and only a single element was passed as input, then the output is a character vector of tokens.
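The list-in/list-out contract can be mimicked in base R. The helper below is hypothetical, written only to illustrate the return shape governed by simplify, not the package's actual implementation:

```r
# Hypothetical helper mimicking the simplify contract of the
# tokenize_* functions (splitting on whitespace for illustration).
tokenize_simple <- function(x, simplify = FALSE) {
  out <- strsplit(x, "\\s+")
  if (simplify && length(out) == 1) out[[1]] else out
}

tokenize_simple("a b c")                   # list holding one character vector
tokenize_simple("a b c", simplify = TRUE)  # a plain character vector
```

Keeping simplify = FALSE guarantees the same return type (a list) regardless of input length, which is easier to program against.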

Examples

song <-  paste0("How many roads must a man walk down\n",
                "Before you call him a man?\n",
                "How many seas must a white dove sail\n",
                "Before she sleeps in the sand?\n",
                "\n",
                "How many times must the cannonballs fly\n",
                "Before they're forever banned?\n",
                "The answer, my friend, is blowin' in the wind.\n",
                "The answer is blowin' in the wind.\n")

tokenize_words(song)
tokenize_sentences(song)
tokenize_paragraphs(song)
tokenize_lines(song)
tokenize_characters(song)
tokenize_regex("A,B,C,D,E", pattern = ",")
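For comparison, tokenize_regex behaves much like base R's strsplit applied element-wise (assuming the regular expression is interpreted the same way in both; the package may use a different regex engine):

```r
# Base-R analogue of the tokenize_regex calls above.
strsplit("A,B,C,D,E", ",")          # list(c("A", "B", "C", "D", "E"))
strsplit("one  two three", "\\s+")  # splits on runs of whitespace
```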
