tokenizer

Tokenizers

Tokenize a document or character vector.

Usage
MC_tokenizer(x)
scan_tokenizer(x)
Arguments
x
A character vector, or an object that can be coerced to character by as.character.
Details

The quality and correctness of a tokenization algorithm depend highly on the context and application scenario. Relevant factors are the language of the underlying text and the notions of whitespace and punctuation marks, both of which can vary with the language and the encoding used. Consequently, for superior results you will probably need a custom tokenization function.
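As an illustration, a minimal custom tokenizer might treat every run of non-word characters as a token boundary. This is a sketch only; the function name punct_tokenizer and the regular expression are illustrative choices, not part of tm:

punct_tokenizer <- function(x) {
    ## Split on runs of characters that are neither alphanumeric nor an
    ## apostrophe, then drop empty strings left by leading delimiters.
    ## (Name and pattern are example choices, not tm API.)
    tokens <- unlist(strsplit(as.character(x), "[^[:alnum:]']+"))
    tokens[nzchar(tokens)]
}

punct_tokenizer("It's a context-dependent task.")
## e.g., "It's" "a" "context" "dependent" "task"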

scan_tokenizer
Relies on scan(..., what = "character").

MC_tokenizer
Implements the functionality of the tokenizer in the MC toolkit (http://www.cs.utexas.edu/users/dml/software/mc/).
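The practical difference between the two built-in tokenizers is easiest to see side by side. A quick sketch (the input string is arbitrary, and the exact tokens returned may vary across tm versions):

library(tm)
txt <- "Visit http://www.example.com, it's free!"
scan_tokenizer(txt)  # whitespace-based: punctuation stays attached to tokens
MC_tokenizer(txt)    # applies the MC toolkit's own rules instead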

Value

A character vector consisting of tokens obtained by tokenization of x.

See Also

getTokenizers to list tokenizers provided by package tm.

Regexp_Tokenizer for tokenizers using regular expressions provided by package NLP.

tokenize for a simple regular-expression-based tokenizer provided by package tau.

Aliases
  • MC_tokenizer
  • scan_tokenizer
Examples
library(tm)    # provides the tokenizers and the example corpus
data("crude")  # 20 Reuters news documents shipped with tm

MC_tokenizer(crude[[1]])
scan_tokenizer(crude[[1]])

## A simple custom tokenizer that splits on runs of whitespace
strsplit_space_tokenizer <- function(x)
    unlist(strsplit(as.character(x), "[[:space:]]+"))
strsplit_space_tokenizer(crude[[1]])
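A tokenizer defined this way is typically plugged into later processing steps. A sketch, assuming the tokenize control option of TermDocumentMatrix (documented for termFreq in tm):

tdm <- TermDocumentMatrix(crude,
                          control = list(tokenize = strsplit_space_tokenizer))
inspect(tdm[1:5, 1:5])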
Documentation reproduced from package tm, version 0.6-2, License: GPL-3
