
type_convert() re-converts character columns in an existing data frame.
This is useful if you need to do some manual munging - you can read the
columns in as character, clean them up with (e.g.) regular expressions,
and then let readr take another stab at parsing them. The name is a homage
to the base utils::type.convert().
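A minimal sketch of that workflow (the sample data, the price column, and the
currency clean-up are made up for illustration; only type_convert() itself
comes from readr):

library(readr)

# read (or construct) the data with everything as character
raw <- data.frame(price = c("$1,200", "$85", "$3,400"), stringsAsFactors = FALSE)
# strip the currency symbols and thousands separators by hand...
raw$price <- gsub("[$,]", "", raw$price)
# ...then let readr take another stab at guessing the column types
str(type_convert(raw))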
Usage

type_convert(
  df,
  col_types = NULL,
  na = c("", "NA"),
  trim_ws = TRUE,
  locale = default_locale(),
  guess_integer = FALSE
)
Arguments

df: A data frame.

col_types: One of NULL, a cols() specification, or a string. See
vignette("readr") for more details. If NULL, column types will be imputed
using all rows.

na: Character vector of strings to interpret as missing values. Set this
option to character() to indicate no missing values.

trim_ws: Should leading and trailing whitespace (ASCII spaces and tabs) be
trimmed from each field before parsing it?

locale: The locale controls defaults that vary from place to place. The
default locale is US-centric (like R), but you can use locale() to create
your own locale that controls things like the default time zone, encoding,
decimal mark, big mark, and day/month names.

guess_integer: If TRUE, guess integer types for whole numbers; if FALSE,
guess numeric type for all numbers.

Brief, illustrative sketches of col_types, na, locale and guess_integer
follow this list.
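A sketch of col_types (the data frame chr and its columns are invented): an
explicit cols() specification overrides guessing.

chr <- data.frame(a = c("1.5", "2.5"), b = c("7", "9"), stringsAsFactors = FALSE)
# force a to double and b to integer instead of letting readr guess
str(type_convert(chr, col_types = cols(a = col_double(), b = col_integer())))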
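A sketch of na (the sentinel value "-99" is invented): any string listed there
is turned into a missing value before the column types are guessed.

miss <- data.frame(m = c("-99", "1", "2"), stringsAsFactors = FALSE)
# "-99" becomes NA, so m can be parsed as a number
str(type_convert(miss, na = c("", "NA", "-99")))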
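A sketch of locale, assuming data recorded with a comma as the decimal mark:

eu <- data.frame(price = c("1,99", "2,50"), stringsAsFactors = FALSE)
# a custom locale lets the guesser read "1,99" as 1.99
str(type_convert(eu, locale = locale(decimal_mark = ",")))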
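And a sketch of guess_integer with an invented column of whole numbers:

whole <- data.frame(n = c("1", "2", "3"), stringsAsFactors = FALSE)
str(type_convert(whole))                        # n guessed as numeric (double)
str(type_convert(whole, guess_integer = TRUE))  # n guessed as integer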
Examples

df <- data.frame(
  x = as.character(runif(10)),
  y = as.character(sample(10)),
  stringsAsFactors = FALSE
)
str(df)
str(type_convert(df))

df <- data.frame(x = c("NA", "10"), stringsAsFactors = FALSE)
str(type_convert(df))
# Type convert can be used to infer types from an entire dataset
# first read the data as character
data <- read_csv(readr_example("mtcars.csv"),
  col_types = list(.default = col_character())
)
str(data)

# Then convert it with type_convert
type_convert(data)