readr (version 1.0.0)

Tokenizers

Description

Explicitly create tokenizer objects. Usually you will not call these functions directly, but will instead use one of the user-friendly wrappers like read_csv.

Usage

tokenizer_delim(delim, quote = "\"", na = "NA", quoted_na = TRUE,
  comment = "", trim_ws = TRUE, escape_double = TRUE,
  escape_backslash = FALSE)
tokenizer_csv(na = "NA", quoted_na = TRUE, comment = "", trim_ws = TRUE)
tokenizer_tsv(na = "NA", quoted_na = TRUE, comment = "", trim_ws = TRUE)
tokenizer_line(na = character())
tokenizer_log()
tokenizer_fwf(begin, end, na = "NA", comment = "")

Arguments

delim
Single character used to separate fields within a record.
quote
Single character used to quote strings.
na
Character vector of strings to use for missing values. Set this option to character() to indicate no missing values.
quoted_na
Should missing values inside quotes be treated as missing values (the default) or as strings?
comment
A string used to identify comments. Any text after the comment characters will be silently ignored.
trim_ws
Should leading and trailing whitespace be trimmed from each field before parsing it?
escape_double
Does the file escape quotes by doubling them? I.e., if this option is TRUE, the value """" represents a single quote, \".
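For instance, with the default escape_double = TRUE, a doubled quote inside a quoted field decodes to a single quote character. A minimal sketch using the read_csv wrapper (which uses the CSV tokenizer internally):

```r
library(readr)

# A one-column CSV whose single value is a quoted field
# containing a doubled quote: "a""b"
df <- read_csv('x\n"a""b"\n')

# The doubled quote collapses to one literal quote: a"b
df$x
```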
escape_backslash
Does the file use backslashes to escape special characters? This is more general than escape_double, as backslashes can be used to escape the delimiter character, the quote character, or to add special characters like \n.
begin, end
Begin and end offsets for each field. These are C++ offsets, so the first column is column zero, and the ranges are [begin, end) (i.e. inclusive-exclusive).
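As a sketch of the zero-based, end-exclusive convention: to describe two five-byte fields occupying bytes 0-4 and 5-9 of each line, pass parallel begin and end vectors (the field widths and offsets here are hypothetical):

```r
library(readr)

# Two fixed-width fields: bytes [0, 5) and [5, 10) of each line
tok <- tokenizer_fwf(begin = c(0, 5), end = c(5, 10))

# Inspect the resulting tokenizer object
str(tok)
```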

Examples

tokenizer_csv()
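The other constructors are called the same way; for example (the pipe delimiter and field offsets below are illustrative choices, not defaults):

```r
library(readr)

# A delimited tokenizer for pipe-separated fields
tokenizer_delim("|")

# Tab-separated fields with the usual defaults
tokenizer_tsv()

# Two fixed-width fields covering bytes [0, 10) and [10, 20)
tokenizer_fwf(begin = c(0, 10), end = c(10, 20))
```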
