Text corpus analysis functions
This package contains functions for text corpus analysis. To create a text
object, use the read_ndjson or as_text function.
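For example, a minimal sketch of creating a text object (the sample
strings are illustrative, and the commented-out file name is hypothetical):

    library(corpus)

    # build a text object directly from a character vector
    x <- as_text(c("I saw Mr. Jones today.",
                   "The train arrived at noon. It was late."))
    x

    # reading a newline-delimited JSON file works similarly;
    # the file name here is hypothetical
    # docs <- read_ndjson("reviews.ndjson")
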
To split text into sentences or token blocks, use text_split.
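A minimal sketch of sentence splitting (the units argument is an
assumption about the interface; units = "tokens" together with a size
argument would give fixed-length token blocks instead):

    library(corpus)

    x <- "I saw Mr. Jones today. He was feeling well. We talked briefly."

    # one result row per sentence
    text_split(x, units = "sentences")
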
To specify preprocessing behavior for transforming a text into a
token sequence, use text_filter. To tokenize text
or compute term frequencies, use text_tokens,
term_counts, term_matrix, or term_frame; sketches of these steps follow.
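A minimal sketch of the filter and tokenization steps (the particular
filter arguments shown, map_case and drop_punct, are assumptions about
the interface, not a complete list):

    library(corpus)

    x <- "The quick brown Fox jumps over the lazy dog!"

    # a filter object describing the preprocessing to apply
    f <- text_filter(map_case = TRUE, drop_punct = TRUE)

    # tokenize using that filter: case is folded and punctuation dropped
    text_tokens(x, filter = f)
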
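A minimal sketch of computing term frequencies (the example texts are
illustrative; the remaining function, term_frame, is listed above but
not shown here):

    library(corpus)

    x <- c(doc1 = "A rose is a rose is a rose.",
           doc2 = "A rose by any other name would smell as sweet.")

    # term frequencies aggregated over the texts
    term_counts(x)

    # document-by-term count matrix
    term_matrix(x)
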
To search for or count specific terms, use text_locate, text_count, or
text_detect.
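A minimal sketch of term search and counting (the texts and the term
vector are illustrative):

    library(corpus)

    x <- c("A rose is a rose.",
           "No roses bloom here.",
           "Violets are blue.")

    # show each occurrence of the terms, with surrounding context
    text_locate(x, c("rose", "roses"))

    # per-text counts and a logical presence indicator
    text_count(x, c("rose", "roses"))
    text_detect(x, c("rose", "roses"))
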
For a complete list of functions, use library(help = "corpus").