tau (version 0.0-21)

util: Preprocessing of Text Documents

Description

Functions for common preprocessing tasks on text documents.

Usage

tokenize(x, lines = FALSE, eol = "\n")
remove_stopwords(x, words, lines = FALSE)

Arguments

x

a character vector.

eol

the end-of-line character to use.

words

a character vector of tokens (the stopwords to be removed).

lines

logical; if TRUE, assume the components of x are lines of text (see Details).

Value

The same type of object as x.

Details

tokenize is a simple regular-expression-based parser that splits the components of a character vector into tokens while protecting infix punctuation. If lines = TRUE, assume that x was imported with readLines and that end-of-line markers need to be added back to the components.
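For instance (a minimal sketch; the printed results are indicative, since the exact token set depends on the parser's regular expressions):

library("tau")
## whitespace is kept as tokens, so pasting the result with
## collapse = "" reconstructs the input
tokenize("It's almost noon.")
## components as lines read with readLines(): eol markers are added back
tokenize(c("first line", "second line"), lines = TRUE)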

remove_stopwords removes the tokens given in words from x. If lines = FALSE, it assumes the components of both vectors contain tokens which can be compared using match. Otherwise, it assumes the tokens in x are delimited by word boundaries (including infix punctuation) and uses regular expression matching.
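A minimal sketch contrasting the two modes (the stopword list here is ad hoc; note that match performs an exact, case-sensitive comparison):

library("tau")
x <- tokenize("The cat sat on the mat.")
## lines = FALSE (default): x holds tokens, compared with match
remove_stopwords(x, words = c("the", "on"))
## lines = TRUE: x holds whole lines, and stopwords are matched at
## word boundaries via regular expressions
remove_stopwords("The cat sat on the mat.", words = c("the", "on"),
                 lines = TRUE)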

Examples

library("tau")
txt <- "\"It's almost noon,\" it@dot.net said."
## split
x <- tokenize(txt)
x
## reconstruct
t <- paste(x, collapse = "")
t

if (require("tm", quietly = TRUE)) {
    words <- readLines(system.file("stopwords", "english.dat",
                       package = "tm"))
    remove_stopwords(x, words)
    remove_stopwords(t, words, lines = TRUE)
} else {
    ## otherwise fall back to a small ad-hoc stopword list
    remove_stopwords(t, words = c("it", "it's"), lines = TRUE)
}
